Nov 26 07:00:26 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 26 07:00:26 crc restorecon[4747]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 07:00:26 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 
07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 07:00:27 crc 
restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 
07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 07:00:27 crc restorecon[4747]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 07:00:27 crc restorecon[4747]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 26 07:00:28 crc kubenswrapper[4909]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 26 07:00:28 crc kubenswrapper[4909]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 26 07:00:28 crc kubenswrapper[4909]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 26 07:00:28 crc kubenswrapper[4909]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
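The long run of "not reset as customized by admin" messages above is expected restorecon behavior: container_file_t is an SELinux customizable type, so restorecon leaves existing labels of that type in place unless forced (for example with restorecon -F). The deprecation warnings that follow all point at the same migration: these kubelet flags should move into the file passed via --config. A minimal KubeletConfiguration sketch of that migration follows; the field names come from the upstream kubelet.config.k8s.io/v1beta1 API, while the socket path, taint, and threshold are illustrative assumptions, not values taken from this node:

    # Hypothetical sketch: flag-to-config-file migration for the deprecated
    # kubelet flags warned about above (all values are illustrative assumptions).
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint (a CRI-O socket path is assumed here)
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints
    registerWithTaints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    # --minimum-container-ttl-duration has no direct config-file equivalent;
    # the warning says to use eviction thresholds instead:
    evictionHard:
      memory.available: "100Mi"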
Nov 26 07:00:28 crc kubenswrapper[4909]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 26 07:00:28 crc kubenswrapper[4909]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.266506 4909 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272268 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272320 4909 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272331 4909 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272339 4909 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272348 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272356 4909 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272367 4909 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272377 4909 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272386 4909 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272394 4909 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272402 4909 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272410 4909 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272420 4909 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272430 4909 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272438 4909 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272447 4909 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272455 4909 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272464 4909 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272471 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272479 4909 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272487 4909 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272501 4909 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272509 4909 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272517 4909 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272525 4909 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272533 4909 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272540 4909 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272548 4909 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272555 4909 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272563 4909 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272571 4909 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272579 4909 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272586 4909 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272594 4909 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272601 4909 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272609 4909 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272624 4909 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272654 4909 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272662 4909 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272669 4909 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272677 4909 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272685 4909 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272693 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272702 4909 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272709 4909 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272717 4909 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272725 4909 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272732 4909 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272742 4909 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272749 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272757 4909 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272765 4909 feature_gate.go:330] unrecognized feature gate: Example Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272772 4909 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272780 4909 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272790 4909 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272801 4909 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272810 4909 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272819 4909 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272827 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272834 4909 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272842 4909 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272849 4909 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272857 4909 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272865 4909 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272872 4909 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272880 4909 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272888 4909 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272896 4909 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272904 4909 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272914 4909 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.272922 4909 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275171 4909 flags.go:64] FLAG: --address="0.0.0.0" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275196 4909 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275213 4909 flags.go:64] FLAG: --anonymous-auth="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275224 4909 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275236 4909 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275245 4909 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275257 4909 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275268 4909 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275278 4909 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275287 4909 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275297 4909 flags.go:64] FLAG: 
--bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275307 4909 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275316 4909 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275325 4909 flags.go:64] FLAG: --cgroup-root="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275334 4909 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275342 4909 flags.go:64] FLAG: --client-ca-file="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275352 4909 flags.go:64] FLAG: --cloud-config="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275362 4909 flags.go:64] FLAG: --cloud-provider="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275370 4909 flags.go:64] FLAG: --cluster-dns="[]" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275380 4909 flags.go:64] FLAG: --cluster-domain="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275389 4909 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275398 4909 flags.go:64] FLAG: --config-dir="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275408 4909 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275448 4909 flags.go:64] FLAG: --container-log-max-files="5" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275465 4909 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275478 4909 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275490 4909 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275503 4909 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275516 4909 flags.go:64] FLAG: --contention-profiling="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275527 4909 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275538 4909 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275550 4909 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275559 4909 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275571 4909 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275580 4909 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275595 4909 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275605 4909 flags.go:64] FLAG: --enable-load-reader="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275652 4909 flags.go:64] FLAG: --enable-server="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275662 4909 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275675 4909 flags.go:64] FLAG: --event-burst="100" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275685 4909 flags.go:64] FLAG: --event-qps="50" 
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275694 4909 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275703 4909 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275713 4909 flags.go:64] FLAG: --eviction-hard="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275737 4909 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275746 4909 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275755 4909 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275764 4909 flags.go:64] FLAG: --eviction-soft="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275774 4909 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275783 4909 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275792 4909 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275801 4909 flags.go:64] FLAG: --experimental-mounter-path="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275810 4909 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275819 4909 flags.go:64] FLAG: --fail-swap-on="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275828 4909 flags.go:64] FLAG: --feature-gates="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275838 4909 flags.go:64] FLAG: --file-check-frequency="20s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275847 4909 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275858 4909 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275867 4909 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275877 4909 flags.go:64] FLAG: --healthz-port="10248" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275886 4909 flags.go:64] FLAG: --help="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275895 4909 flags.go:64] FLAG: --hostname-override="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275903 4909 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275912 4909 flags.go:64] FLAG: --http-check-frequency="20s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275921 4909 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275930 4909 flags.go:64] FLAG: --image-credential-provider-config="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275938 4909 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275948 4909 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275958 4909 flags.go:64] FLAG: --image-service-endpoint="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275966 4909 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275976 4909 flags.go:64] FLAG: --kube-api-burst="100" Nov 26 07:00:28 crc 
kubenswrapper[4909]: I1126 07:00:28.275985 4909 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.275994 4909 flags.go:64] FLAG: --kube-api-qps="50" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276002 4909 flags.go:64] FLAG: --kube-reserved="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276011 4909 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276020 4909 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276029 4909 flags.go:64] FLAG: --kubelet-cgroups="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276040 4909 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276049 4909 flags.go:64] FLAG: --lock-file="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276058 4909 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276068 4909 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276077 4909 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276090 4909 flags.go:64] FLAG: --log-json-split-stream="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276098 4909 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276107 4909 flags.go:64] FLAG: --log-text-split-stream="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276117 4909 flags.go:64] FLAG: --logging-format="text" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276125 4909 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276135 4909 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276143 4909 flags.go:64] FLAG: --manifest-url="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276152 4909 flags.go:64] FLAG: --manifest-url-header="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276164 4909 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276173 4909 flags.go:64] FLAG: --max-open-files="1000000" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276184 4909 flags.go:64] FLAG: --max-pods="110" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276193 4909 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276202 4909 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276211 4909 flags.go:64] FLAG: --memory-manager-policy="None" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276220 4909 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276229 4909 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276238 4909 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276247 4909 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 
07:00:28.276267 4909 flags.go:64] FLAG: --node-status-max-images="50" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276276 4909 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276285 4909 flags.go:64] FLAG: --oom-score-adj="-999" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276295 4909 flags.go:64] FLAG: --pod-cidr="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276304 4909 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276317 4909 flags.go:64] FLAG: --pod-manifest-path="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276326 4909 flags.go:64] FLAG: --pod-max-pids="-1" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276335 4909 flags.go:64] FLAG: --pods-per-core="0" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276344 4909 flags.go:64] FLAG: --port="10250" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276358 4909 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276367 4909 flags.go:64] FLAG: --provider-id="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276377 4909 flags.go:64] FLAG: --qos-reserved="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276386 4909 flags.go:64] FLAG: --read-only-port="10255" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276395 4909 flags.go:64] FLAG: --register-node="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276404 4909 flags.go:64] FLAG: --register-schedulable="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276413 4909 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276427 4909 flags.go:64] FLAG: --registry-burst="10" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276436 4909 flags.go:64] FLAG: --registry-qps="5" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276453 4909 flags.go:64] FLAG: --reserved-cpus="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276461 4909 flags.go:64] FLAG: --reserved-memory="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276472 4909 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276517 4909 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276526 4909 flags.go:64] FLAG: --rotate-certificates="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276535 4909 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276544 4909 flags.go:64] FLAG: --runonce="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276553 4909 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276561 4909 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276571 4909 flags.go:64] FLAG: --seccomp-default="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276580 4909 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276589 4909 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 26 07:00:28 crc 
kubenswrapper[4909]: I1126 07:00:28.276598 4909 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276607 4909 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276617 4909 flags.go:64] FLAG: --storage-driver-password="root" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276666 4909 flags.go:64] FLAG: --storage-driver-secure="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276676 4909 flags.go:64] FLAG: --storage-driver-table="stats" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276685 4909 flags.go:64] FLAG: --storage-driver-user="root" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276694 4909 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276703 4909 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276712 4909 flags.go:64] FLAG: --system-cgroups="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276722 4909 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276737 4909 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276751 4909 flags.go:64] FLAG: --tls-cert-file="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276762 4909 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276773 4909 flags.go:64] FLAG: --tls-min-version="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276782 4909 flags.go:64] FLAG: --tls-private-key-file="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276791 4909 flags.go:64] FLAG: --topology-manager-policy="none" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276799 4909 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276809 4909 flags.go:64] FLAG: --topology-manager-scope="container" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276818 4909 flags.go:64] FLAG: --v="2" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276830 4909 flags.go:64] FLAG: --version="false" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276841 4909 flags.go:64] FLAG: --vmodule="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276852 4909 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.276861 4909 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277062 4909 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277071 4909 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277095 4909 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277104 4909 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277112 4909 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277120 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 26 07:00:28 crc 
kubenswrapper[4909]: W1126 07:00:28.277128 4909 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277136 4909 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277143 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277151 4909 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277160 4909 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277167 4909 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277175 4909 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277182 4909 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277190 4909 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277198 4909 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277205 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277213 4909 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277221 4909 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277229 4909 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277239 4909 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277250 4909 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277259 4909 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277266 4909 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277275 4909 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277282 4909 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277290 4909 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277301 4909 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277312 4909 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277322 4909 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277332 4909 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277342 4909 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277352 4909 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277361 4909 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277369 4909 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277377 4909 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277385 4909 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277394 4909 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277402 4909 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277410 4909 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277418 4909 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277427 4909 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277435 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277443 4909 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277451 4909 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277460 4909 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277468 4909 feature_gate.go:330] unrecognized feature gate: Example Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277476 4909 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277484 4909 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277491 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277499 4909 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277506 4909 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277517 4909 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277527 4909 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277536 4909 feature_gate.go:330] unrecognized feature gate: 
NetworkLiveMigration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277543 4909 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277551 4909 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277561 4909 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277572 4909 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277580 4909 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277595 4909 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277603 4909 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277612 4909 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277620 4909 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277650 4909 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277658 4909 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277666 4909 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277674 4909 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277682 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277690 4909 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.277697 4909 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.277721 4909 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.292350 4909 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.292394 4909 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292510 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292521 4909 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292527 4909 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292533 
4909 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292538 4909 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292543 4909 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292548 4909 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292553 4909 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292557 4909 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292562 4909 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292567 4909 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292579 4909 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292586 4909 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292597 4909 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292602 4909 feature_gate.go:330] unrecognized feature gate: Example Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292607 4909 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292612 4909 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292617 4909 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292622 4909 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292627 4909 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292649 4909 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292657 4909 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292664 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292671 4909 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292678 4909 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292685 4909 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292693 4909 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292707 4909 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292713 4909 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292719 4909 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292724 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292730 4909 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292736 4909 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292743 4909 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292750 4909 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292757 4909 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292763 4909 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292770 4909 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292776 4909 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292781 4909 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292786 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292791 4909 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292797 4909 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292802 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292807 4909 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292812 4909 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292817 4909 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292823 4909 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292829 4909 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292835 4909 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292840 4909 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 26 07:00:28 crc 
kubenswrapper[4909]: W1126 07:00:28.292846 4909 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292874 4909 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292880 4909 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292886 4909 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292891 4909 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292897 4909 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292903 4909 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292910 4909 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292916 4909 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292921 4909 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292926 4909 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292931 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292936 4909 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292941 4909 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292946 4909 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292951 4909 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292955 4909 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292960 4909 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292965 4909 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.292970 4909 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.292981 4909 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293356 4909 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293368 4909 feature_gate.go:330] unrecognized feature gate: 
ChunkSizeMiB Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293374 4909 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293380 4909 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293386 4909 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293391 4909 feature_gate.go:330] unrecognized feature gate: Example Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293397 4909 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293403 4909 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293409 4909 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293415 4909 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293421 4909 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293427 4909 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293432 4909 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293437 4909 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293442 4909 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293448 4909 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293453 4909 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293463 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293476 4909 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293483 4909 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293489 4909 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293496 4909 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293501 4909 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293509 4909 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293518 4909 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293528 4909 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293535 4909 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293541 4909 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293548 4909 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293554 4909 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293561 4909 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293569 4909 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293576 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293583 4909 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293589 4909 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293597 4909 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293604 4909 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293614 4909 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293628 4909 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293659 4909 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293666 4909 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293672 4909 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293678 4909 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293684 4909 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293690 4909 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293697 4909 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293703 4909 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293709 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293717 4909 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293723 4909 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293729 4909 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293736 4909 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293742 4909 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293748 4909 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293753 4909 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293759 4909 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293766 4909 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293773 4909 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293778 4909 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293784 4909 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293790 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293796 4909 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293801 4909 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293807 4909 feature_gate.go:330] unrecognized 
feature gate: ManagedBootImages Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293813 4909 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293819 4909 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293824 4909 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293830 4909 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293837 4909 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293843 4909 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.293848 4909 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.293859 4909 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.295471 4909 server.go:940] "Client rotation is on, will bootstrap in background" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.300451 4909 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.300574 4909 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.303263 4909 server.go:997] "Starting client certificate rotation"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.303300 4909 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.304067 4909 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-24 06:07:48.757620049 +0000 UTC
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.304216 4909 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 671h7m20.453407513s for next certificate rotation
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.326441 4909 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.329548 4909 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.348710 4909 log.go:25] "Validated CRI v1 runtime API"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.380680 4909 log.go:25] "Validated CRI v1 image API"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.382972 4909 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.390866 4909 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-26-06-55-55-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.390907 4909 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:44 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.412360 4909 manager.go:217] Machine: {Timestamp:2025-11-26 07:00:28.405426406 +0000 UTC m=+0.551637602 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b BootID:35ed46f4-00cf-47b9-9f48-1d94d36971ca Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:44 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d6:6d:56 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d6:6d:56 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:55:20:ac Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:59:08:a3 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7b:4a:54 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:44:2e:56 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:2f:28:fa Speed:-1 Mtu:1496} {Name:eth10 MacAddress:36:33:cc:2c:55:3e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:aa:06:d8:d8:4d:65 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.413074 4909 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.413392 4909 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.414000 4909 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.414565 4909 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.414627 4909 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.415870 4909 topology_manager.go:138] "Creating topology manager with none policy"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.415915 4909 container_manager_linux.go:303] "Creating device plugin manager"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.416356 4909 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.416421 4909 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.417711 4909 state_mem.go:36] "Initialized new in-memory state store"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.418406 4909 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.422175 4909 kubelet.go:418] "Attempting to sync node with API server"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.422216 4909 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.422246 4909 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.422269 4909 kubelet.go:324] "Adding apiserver pod source"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.422290 4909 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.427597 4909 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.428563 4909 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.431154 4909 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.432703 4909 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.432805 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.432722 4909 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused
Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.432915 4909 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.206:6443: connect: connection refused" logger="UnhandledError"
Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.432951 4909 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.206:6443: connect: connection refused" logger="UnhandledError"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.432841 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.433032 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.433051 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.433077 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.433091 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.433104 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.433126 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.433144 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.433158 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.433176 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.433191 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.433255 4909 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.434181 4909 server.go:1280] "Started kubelet"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.434402 4909 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.434440 4909 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.435357 4909 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.435602 4909 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.436398 4909 server.go:460] "Adding debug handlers to kubelet server"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.437212 4909 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.437251 4909 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 26 07:00:28 crc systemd[1]: Started Kubernetes Kubelet.
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.438394 4909 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:59:15.957587772 +0000 UTC
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.438428 4909 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 912h58m47.519161191s for next certificate rotation
Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.438630 4909 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.438792 4909 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.438837 4909 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.438881 4909 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.439716 4909 factory.go:55] Registering systemd factory
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.439756 4909 factory.go:221] Registration of the systemd container factory successfully
Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.440307 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" interval="200ms"
Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.440286 4909 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused
Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.440400 4909 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.206:6443: connect: connection refused" logger="UnhandledError"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.440757 4909 factory.go:153] Registering CRI-O factory
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.440891 4909 factory.go:221] Registration of the crio container factory successfully
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.444215 4909 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.444326 4909 factory.go:103] Registering Raw factory
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.444359 4909 manager.go:1196] Started watching for new ooms in manager
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.445902 4909 manager.go:319] Starting recovery of all containers
Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.444370 4909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.206:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b7c5f72b89c0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-26 07:00:28.434119692 +0000 UTC m=+0.580330908,LastTimestamp:2025-11-26 07:00:28.434119692 +0000 UTC m=+0.580330908,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457474 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457547 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457613 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457725 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457754 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457782 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457810 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457838 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457865 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457885 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457905 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457925 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457949 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457972 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.457993 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458013 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458035 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458055 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458077 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458096 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458115 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458135 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458153 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458173 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458195 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458217 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458242 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458265 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458319 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458340 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458360 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458386 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458406 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458456 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458483 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458509 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458532 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458555 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458666 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458690 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458719 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458744 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458770 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458796 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458814 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458834 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458853 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458871 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458894 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458915 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458934 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458954 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.458982 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459001 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459022 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459046 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459067 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459085 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459107 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459126 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459147 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459168 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459188 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459208 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459230 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459252 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459279 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459305 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459335 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459361 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459386 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.459416 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461224 4909 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461309 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461346 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461377 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461405 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461434 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461461 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461494 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461555 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461598 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461717 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461752 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461780 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461805 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461831 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461855 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461884 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461909 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461938 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.461976 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462006 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462039 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462071 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462098 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462128 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462154 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462182 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462206 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462235 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462267 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462296 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462321 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462352 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462391 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462421 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462452 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462499 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462540 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462570 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462606 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462666 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462699 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462728 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462757 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462788 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462818 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462844 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462873 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462903 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462932 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462959 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.462988 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463017 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463046 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463074 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463101 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463132 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463160 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463185 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463238 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463276 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463302 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463326 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463352 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463378 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463402 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463429 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463456 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463494 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463521 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08"
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463547 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463576 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463618 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463677 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463704 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463742 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463769 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463792 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463820 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463848 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463875 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463901 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463930 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463956 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.463982 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464009 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464038 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464065 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464094 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464118 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464148 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464176 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464203 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464227 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464258 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464285 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464310 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464335 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464364 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464400 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464469 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464515 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464543 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464569 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464598 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464663 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464692 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464722 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464750 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464775 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464805 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464838 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464864 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464893 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464920 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464946 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.464976 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465004 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465044 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465071 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465101 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465132 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465158 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465189 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465217 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465246 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465273 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465300 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465324 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465352 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465381 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465408 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465437 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465461 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465489 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465515 4909 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465541 4909 reconstruct.go:97] "Volume reconstruction finished" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.465559 4909 reconciler.go:26] "Reconciler: start to sync state" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.468351 4909 manager.go:324] Recovery completed Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.478666 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.480099 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.480130 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.480141 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.481479 4909 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.481498 4909 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.481517 4909 state_mem.go:36] "Initialized new in-memory state store" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.495277 4909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.497527 4909 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.497584 4909 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.497651 4909 kubelet.go:2335] "Starting kubelet main sync loop" Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.497728 4909 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 26 07:00:28 crc kubenswrapper[4909]: W1126 07:00:28.501935 4909 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.502075 4909 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.206:6443: connect: connection refused" logger="UnhandledError" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.502158 4909 policy_none.go:49] "None policy: Start" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.504010 4909 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.504049 4909 state_mem.go:35] "Initializing new in-memory state store" Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.538919 4909 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.564924 4909 manager.go:334] "Starting Device Plugin manager" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.565053 4909 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.565070 4909 server.go:79] "Starting device plugin registration server" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.565557 4909 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.565580 4909 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.566056 4909 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.566138 4909 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.566147 4909 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.578070 4909 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.598020 4909 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 07:00:28 crc kubenswrapper[4909]: 
I1126 07:00:28.598199 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.599828 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.599919 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.599950 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.600270 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.600529 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.600643 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.602330 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.602335 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.602381 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.602432 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.602542 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.602496 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.602881 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.602956 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.602964 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.604432 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.604480 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.604540 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.604560 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.604496 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.604632 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.604794 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.604846 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.604898 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.606815 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.606837 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.606847 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.607049 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.607064 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.607071 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.607271 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.607866 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.607902 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.608637 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.608667 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.608677 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.608627 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.608799 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.608813 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.608979 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.609007 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.609651 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.609690 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.609704 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.641331 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" interval="400ms" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.665741 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667238 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667289 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667320 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667325 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667346 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667368 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667379 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667385 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667403 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667426 4909 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667456 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667496 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667529 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667550 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667657 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667750 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667819 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667872 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.667903 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.668169 4909 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.206:6443: connect: connection refused" node="crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.769921 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770046 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770123 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770177 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770229 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770281 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770285 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770298 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770224 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770327 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770338 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770406 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770422 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:28 crc 
kubenswrapper[4909]: I1126 07:00:28.770480 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770527 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770637 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770646 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770718 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770728 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770802 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770876 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770926 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.770967 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: 
I1126 07:00:28.770974 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.771036 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.771044 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.771089 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.771087 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.771123 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.771271 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.868754 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.870863 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.870924 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.870942 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.870985 4909 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 07:00:28 crc kubenswrapper[4909]: E1126 07:00:28.873680 4909 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.129.56.206:6443: connect: connection refused" node="crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.930111 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.948942 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.965007 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.981708 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 26 07:00:28 crc kubenswrapper[4909]: I1126 07:00:28.992367 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:29 crc kubenswrapper[4909]: W1126 07:00:29.015232 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-5205dbeec8eecb15b7b08b58e07240f75381260a872969a6d010c7bb2da444bb WatchSource:0}: Error finding container 5205dbeec8eecb15b7b08b58e07240f75381260a872969a6d010c7bb2da444bb: Status 404 returned error can't find the container with id 5205dbeec8eecb15b7b08b58e07240f75381260a872969a6d010c7bb2da444bb Nov 26 07:00:29 crc kubenswrapper[4909]: W1126 07:00:29.021192 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-437a653668cc1659b63a27636dbf388646bed48bea19fd56005b0587feff026b WatchSource:0}: Error finding container 437a653668cc1659b63a27636dbf388646bed48bea19fd56005b0587feff026b: Status 404 returned error can't find the container with id 437a653668cc1659b63a27636dbf388646bed48bea19fd56005b0587feff026b Nov 26 07:00:29 crc kubenswrapper[4909]: W1126 07:00:29.025468 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-739e1961ac24a046f48f4ba7cc3c2e70a37d1914d93ec9fc175b532a250a98bb WatchSource:0}: Error finding container 739e1961ac24a046f48f4ba7cc3c2e70a37d1914d93ec9fc175b532a250a98bb: Status 404 returned error can't find the container with id 739e1961ac24a046f48f4ba7cc3c2e70a37d1914d93ec9fc175b532a250a98bb Nov 26 07:00:29 crc kubenswrapper[4909]: E1126 07:00:29.042768 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" interval="800ms" Nov 26 07:00:29 crc kubenswrapper[4909]: I1126 07:00:29.274452 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:29 crc kubenswrapper[4909]: I1126 07:00:29.277719 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:29 crc kubenswrapper[4909]: I1126 07:00:29.277806 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:29 crc kubenswrapper[4909]: I1126 07:00:29.277828 4909 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:29 crc kubenswrapper[4909]: I1126 07:00:29.277893 4909 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 07:00:29 crc kubenswrapper[4909]: E1126 07:00:29.278686 4909 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.206:6443: connect: connection refused" node="crc" Nov 26 07:00:29 crc kubenswrapper[4909]: I1126 07:00:29.436702 4909 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused Nov 26 07:00:29 crc kubenswrapper[4909]: W1126 07:00:29.497549 4909 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused Nov 26 07:00:29 crc kubenswrapper[4909]: E1126 07:00:29.497655 4909 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.206:6443: connect: connection refused" logger="UnhandledError" Nov 26 07:00:29 crc kubenswrapper[4909]: I1126 07:00:29.503340 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4313742674f0edafe71585210d4b2abd922ee36ee353689846a0af0dc2b1279e"} Nov 26 07:00:29 crc kubenswrapper[4909]: I1126 07:00:29.504503 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5205dbeec8eecb15b7b08b58e07240f75381260a872969a6d010c7bb2da444bb"} Nov 26 07:00:29 crc kubenswrapper[4909]: I1126 07:00:29.507717 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"437a653668cc1659b63a27636dbf388646bed48bea19fd56005b0587feff026b"} Nov 26 07:00:29 crc kubenswrapper[4909]: I1126 07:00:29.509447 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d8b4f0d573e33da508aa7d27c87f37d1f16a957b68e3e72cc11f90f17fa0fa5f"} Nov 26 07:00:29 crc kubenswrapper[4909]: I1126 07:00:29.512079 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"739e1961ac24a046f48f4ba7cc3c2e70a37d1914d93ec9fc175b532a250a98bb"} Nov 26 07:00:29 crc kubenswrapper[4909]: W1126 07:00:29.684834 4909 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused Nov 26 07:00:29 crc kubenswrapper[4909]: E1126 07:00:29.684939 
4909 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.206:6443: connect: connection refused" logger="UnhandledError" Nov 26 07:00:29 crc kubenswrapper[4909]: E1126 07:00:29.844851 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" interval="1.6s" Nov 26 07:00:29 crc kubenswrapper[4909]: W1126 07:00:29.949438 4909 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused Nov 26 07:00:29 crc kubenswrapper[4909]: E1126 07:00:29.949523 4909 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.206:6443: connect: connection refused" logger="UnhandledError" Nov 26 07:00:29 crc kubenswrapper[4909]: W1126 07:00:29.975217 4909 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused Nov 26 07:00:29 crc kubenswrapper[4909]: E1126 07:00:29.975333 4909 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.206:6443: connect: connection refused" logger="UnhandledError" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.079212 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.081127 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.081186 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.081205 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.081242 4909 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 07:00:30 crc kubenswrapper[4909]: E1126 07:00:30.081779 4909 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.206:6443: connect: connection refused" node="crc" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.437152 4909 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 
07:00:30.517385 4909 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1" exitCode=0 Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.517505 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.517498 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1"} Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.519278 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2"} Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.519306 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8"} Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.519316 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c"} Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.519364 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.519402 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.519416 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.520210 4909 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1" exitCode=0 Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.520253 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1"} Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.520313 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.520941 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.520959 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.520968 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.523576 4909 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1" exitCode=0 Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.523663 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1"} Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.523771 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.525240 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.525281 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.525299 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.526139 4909 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="05e496612536b630b450fc85d61f5820ca4f202de5a566f2d20ea3f932f8ad35" exitCode=0 Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.526182 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"05e496612536b630b450fc85d61f5820ca4f202de5a566f2d20ea3f932f8ad35"} Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.526270 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.527141 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.527228 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.527264 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.527280 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.528084 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.528114 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:30 crc kubenswrapper[4909]: I1126 07:00:30.528124 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.436808 4909 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.206:6443: connect: connection refused Nov 26 07:00:31 crc kubenswrapper[4909]: E1126 07:00:31.445672 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" interval="3.2s" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.533376 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7"} Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.533460 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126"} Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.533479 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df"} Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.533495 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f"} Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.535448 4909 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="05d435121671b1437a0e546aa42dace3a52c9a6991ad6112370a058d1b7a5edb" exitCode=0 Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.535523 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"05d435121671b1437a0e546aa42dace3a52c9a6991ad6112370a058d1b7a5edb"} Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.535601 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.537271 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.537302 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.537314 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.538851 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"698a6e39c834e6c9a1a357f19476559f563af9076acfe5e96aba18e4b839777e"} Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.538942 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.540460 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.540486 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:31 crc 
kubenswrapper[4909]: I1126 07:00:31.540496 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.549764 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66"} Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.549893 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.551520 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.551564 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.551579 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.558214 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f"} Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.558313 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251"} Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.558337 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea"} Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.558393 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.560665 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.560700 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.560715 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.682020 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.683423 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.683457 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 07:00:31.683468 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:31 crc kubenswrapper[4909]: I1126 
07:00:31.683493 4909 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 07:00:31 crc kubenswrapper[4909]: E1126 07:00:31.683925 4909 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.206:6443: connect: connection refused" node="crc" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.567648 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218"} Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.567846 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.569760 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.569846 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.569867 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.571433 4909 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5ca49f700ea24b260369ef8f63bb1501b456faaef296efded381db24dfaf9d89" exitCode=0 Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.571558 4909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.571579 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.571609 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.571593 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5ca49f700ea24b260369ef8f63bb1501b456faaef296efded381db24dfaf9d89"} Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.571745 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.572504 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.573339 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.573373 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.573383 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.573392 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.573402 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.573403 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.573398 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.573546 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.573572 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.575471 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.575532 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:32 crc kubenswrapper[4909]: I1126 07:00:32.575554 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:33 crc kubenswrapper[4909]: I1126 07:00:33.583880 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8c2fee54e9eb9f8dbb2ac0b85a4d9964f264e5634137b73943bcbb49bb40d827"} Nov 26 07:00:33 crc kubenswrapper[4909]: I1126 07:00:33.584236 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4b56b33b39d8ffd0c1342e905f73b6a3624b0e251f017a8b191c12b2774c3901"} Nov 26 07:00:33 crc kubenswrapper[4909]: I1126 07:00:33.584256 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f225c9e1da6eb976a971ac633b180045e873eee1518c52432d94c77fb8a789e8"} Nov 26 07:00:33 crc kubenswrapper[4909]: I1126 07:00:33.584271 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d25931bc555c878a98a740f929e3000346bbe6a0c231fc15a44058518c19a6da"} Nov 26 07:00:33 crc kubenswrapper[4909]: I1126 07:00:33.584117 4909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 07:00:33 crc kubenswrapper[4909]: I1126 07:00:33.584362 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:33 crc kubenswrapper[4909]: I1126 07:00:33.585630 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:33 crc kubenswrapper[4909]: I1126 07:00:33.585684 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:33 crc kubenswrapper[4909]: I1126 07:00:33.585699 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.591559 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f1834dee057a2d2793ab0fe17998fb47a1360176072103f5c13ec23f12c68d3c"} Nov 26 07:00:34 
crc kubenswrapper[4909]: I1126 07:00:34.591750 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.593178 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.593228 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.593245 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.681550 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.885011 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.886536 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.886576 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.886590 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.886635 4909 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.904906 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.905180 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.906793 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.906849 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:34 crc kubenswrapper[4909]: I1126 07:00:34.906868 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.593427 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.594534 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.594569 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.594583 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.706649 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.706870 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.707990 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.708026 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.708040 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.946392 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.946742 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.948058 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.948112 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:35 crc kubenswrapper[4909]: I1126 07:00:35.948131 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:36 crc kubenswrapper[4909]: I1126 07:00:36.140282 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:36 crc kubenswrapper[4909]: I1126 07:00:36.596575 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:36 crc kubenswrapper[4909]: I1126 07:00:36.596625 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:36 crc kubenswrapper[4909]: I1126 07:00:36.597700 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:36 crc kubenswrapper[4909]: I1126 07:00:36.597731 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:36 crc kubenswrapper[4909]: I1126 07:00:36.597741 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:36 crc kubenswrapper[4909]: I1126 07:00:36.597852 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:36 crc kubenswrapper[4909]: I1126 07:00:36.597885 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:36 crc kubenswrapper[4909]: I1126 07:00:36.597905 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:37 crc kubenswrapper[4909]: I1126 07:00:37.304229 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:37 crc kubenswrapper[4909]: I1126 07:00:37.304464 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:37 crc kubenswrapper[4909]: I1126 07:00:37.306033 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:37 crc kubenswrapper[4909]: I1126 07:00:37.306135 
4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:37 crc kubenswrapper[4909]: I1126 07:00:37.306156 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:38 crc kubenswrapper[4909]: E1126 07:00:38.578236 4909 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 26 07:00:38 crc kubenswrapper[4909]: I1126 07:00:38.908910 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 07:00:38 crc kubenswrapper[4909]: I1126 07:00:38.909189 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:38 crc kubenswrapper[4909]: I1126 07:00:38.910877 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:38 crc kubenswrapper[4909]: I1126 07:00:38.910994 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:38 crc kubenswrapper[4909]: I1126 07:00:38.911053 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:39 crc kubenswrapper[4909]: I1126 07:00:39.548940 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 26 07:00:39 crc kubenswrapper[4909]: I1126 07:00:39.549321 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:39 crc kubenswrapper[4909]: I1126 07:00:39.551584 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:39 crc kubenswrapper[4909]: I1126 07:00:39.551678 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:39 crc kubenswrapper[4909]: I1126 07:00:39.551690 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.304436 4909 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.304568 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.531328 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.531681 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.533204 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 
07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.533276 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.533300 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.540725 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.608505 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.608780 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.609982 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.610055 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.610083 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:40 crc kubenswrapper[4909]: I1126 07:00:40.613826 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:41 crc kubenswrapper[4909]: I1126 07:00:41.610809 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:41 crc kubenswrapper[4909]: I1126 07:00:41.612064 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:41 crc kubenswrapper[4909]: I1126 07:00:41.612118 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:41 crc kubenswrapper[4909]: I1126 07:00:41.612135 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.437057 4909 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 26 07:00:42 crc kubenswrapper[4909]: W1126 07:00:42.491865 4909 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.491996 4909 trace.go:236] Trace[1468136912]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 07:00:32.490) (total time: 10001ms): Nov 26 07:00:42 crc kubenswrapper[4909]: Trace[1468136912]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:00:42.491) Nov 26 07:00:42 crc kubenswrapper[4909]: Trace[1468136912]: [10.001406849s] [10.001406849s] END Nov 26 07:00:42 crc kubenswrapper[4909]: E1126 07:00:42.492026 4909 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 26 07:00:42 crc kubenswrapper[4909]: W1126 07:00:42.535906 4909 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.535995 4909 trace.go:236] Trace[220797193]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 07:00:32.534) (total time: 10001ms): Nov 26 07:00:42 crc kubenswrapper[4909]: Trace[220797193]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:00:42.535) Nov 26 07:00:42 crc kubenswrapper[4909]: Trace[220797193]: [10.001592494s] [10.001592494s] END Nov 26 07:00:42 crc kubenswrapper[4909]: E1126 07:00:42.536017 4909 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.612959 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.614534 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.614621 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.614643 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:42 crc kubenswrapper[4909]: W1126 07:00:42.669753 4909 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.669843 4909 trace.go:236] Trace[1266071868]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 07:00:32.668) (total time: 10001ms): Nov 26 07:00:42 crc kubenswrapper[4909]: Trace[1266071868]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:00:42.669) Nov 26 07:00:42 crc kubenswrapper[4909]: Trace[1266071868]: [10.001461645s] [10.001461645s] END Nov 26 07:00:42 crc kubenswrapper[4909]: E1126 07:00:42.669866 4909 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.953067 4909 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.953135 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.957437 4909 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 26 07:00:42 crc kubenswrapper[4909]: I1126 07:00:42.957521 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 26 07:00:44 crc kubenswrapper[4909]: I1126 07:00:44.720039 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 26 07:00:44 crc kubenswrapper[4909]: I1126 07:00:44.720347 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:44 crc kubenswrapper[4909]: I1126 07:00:44.721694 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:44 crc kubenswrapper[4909]: I1126 07:00:44.721732 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:44 crc kubenswrapper[4909]: I1126 07:00:44.721740 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:44 crc kubenswrapper[4909]: I1126 07:00:44.737865 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 26 07:00:45 crc kubenswrapper[4909]: I1126 07:00:45.622153 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:45 crc kubenswrapper[4909]: I1126 07:00:45.623557 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:45 crc kubenswrapper[4909]: I1126 07:00:45.623631 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:45 crc kubenswrapper[4909]: I1126 07:00:45.623649 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:45 crc 
kubenswrapper[4909]: I1126 07:00:45.955634 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 07:00:45 crc kubenswrapper[4909]: I1126 07:00:45.955867 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 07:00:45 crc kubenswrapper[4909]: I1126 07:00:45.957378 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:00:45 crc kubenswrapper[4909]: I1126 07:00:45.957455 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:00:45 crc kubenswrapper[4909]: I1126 07:00:45.957479 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:00:45 crc kubenswrapper[4909]: I1126 07:00:45.962438 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 07:00:46 crc kubenswrapper[4909]: I1126 07:00:46.143883 4909 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 26 07:00:46 crc kubenswrapper[4909]: I1126 07:00:46.625164 4909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 26 07:00:46 crc kubenswrapper[4909]: I1126 07:00:46.625244 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 07:00:46 crc kubenswrapper[4909]: I1126 07:00:46.626803 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:00:46 crc kubenswrapper[4909]: I1126 07:00:46.626884 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:00:46 crc kubenswrapper[4909]: I1126 07:00:46.626899 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:00:47 crc kubenswrapper[4909]: I1126 07:00:47.223257 4909 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 26 07:00:47 crc kubenswrapper[4909]: E1126 07:00:47.940390 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Nov 26 07:00:47 crc kubenswrapper[4909]: I1126 07:00:47.944942 4909 trace.go:236] Trace[1566300724]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 07:00:33.137) (total time: 14807ms):
Nov 26 07:00:47 crc kubenswrapper[4909]: Trace[1566300724]: ---"Objects listed" error: 14807ms (07:00:47.944)
Nov 26 07:00:47 crc kubenswrapper[4909]: Trace[1566300724]: [14.807149081s] [14.807149081s] END
Nov 26 07:00:47 crc kubenswrapper[4909]: I1126 07:00:47.944986 4909 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 26 07:00:47 crc kubenswrapper[4909]: E1126 07:00:47.947731 4909 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Nov 26 07:00:47 crc kubenswrapper[4909]: I1126 07:00:47.949322 4909 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.053627 4909 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58622->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.053697 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58622->192.168.126.11:17697: read: connection reset by peer"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.054046 4909 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.054099 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.406911 4909 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.406997 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.416566 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.421720 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.435337 4909 apiserver.go:52] "Watching apiserver"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.439310 4909 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.439648 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.439984 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.440190 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.440424 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.440450 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.440551 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.440653 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.440690 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.440759 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.440802 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.441777 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.442458 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.443409 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.443555 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.443447 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.444520 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.444740 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.444921 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.445031 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.477127 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.499306 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.512949 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.525245 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.535387 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.539874 4909 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.548699 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.552852 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.552906 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.552935 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.552960 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.552982 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553003 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553023 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553044 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553066 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553088 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553108 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553128 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553150 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553169 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553190 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553211 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553230 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553253 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553273 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553294 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553317 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553339 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553360 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553383 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553403 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553425 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553446 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553467 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553488 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553509 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553528 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553549 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553622 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553644 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553673 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553694 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553716 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553739 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553793 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553817 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553838 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553871 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553896 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553918 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553925 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.553940 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554010 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554038 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554063 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554086 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554109 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554132 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554153 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554176 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554198 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554220 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554241 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554263 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554285 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554232 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554305 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554339 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554385 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554492 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554540 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554579 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554648 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554685 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554723 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554759 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554765 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554794 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554817 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554829 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554883 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554944 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554971 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.554992 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555017 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555041 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555065 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555041 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555097 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555122 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555150 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555173 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555194 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555216 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555238 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555263 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555286 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555307 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555329 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555353 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555376 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555399 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555482 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555509 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555534 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555558 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555584 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555636 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555659 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555679 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555701 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555722 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555743 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555766 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555788 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started
for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555816 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555841 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555863 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555884 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555907 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555931 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555953 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556479 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556509 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556531 4909 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556556 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556577 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556625 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556656 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556689 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556710 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556734 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556757 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556779 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 26 07:00:48 crc 
kubenswrapper[4909]: I1126 07:00:48.556802 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556828 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556851 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556872 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556895 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556915 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556938 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556962 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556984 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557006 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557031 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557053 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557077 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557100 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557121 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557142 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557163 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557190 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557211 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557234 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557269 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557300 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557322 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557349 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557373 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557395 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557420 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557442 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557464 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557486 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557507 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557531 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557556 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.558556 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.558586 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.558881 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.558905 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.558957 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.558982 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559006 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559028 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559053 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559075 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559099 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559123 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559159 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559182 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559206 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559229 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.560724 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.560801 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.560839 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.560869 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.560906 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.560940 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.560976 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561012 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561050 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561221 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561267 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561305 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561339 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561370 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561403 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561432 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561463 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561501 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561528 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561557 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561585 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561652 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561689 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561720 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561805 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561851 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561888 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561922 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561973 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562006 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562039 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562066 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562095 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562127 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562161 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562192 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562218 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562244 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" 
(UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562355 4909 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562375 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562393 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562409 4909 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562423 4909 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562441 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.565812 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.567935 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.569326 4909 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555328 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.569767 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555405 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555373 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555543 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555726 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.555788 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556047 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556027 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556083 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556109 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556169 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556282 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556325 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556431 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556624 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556749 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556703 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556819 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556818 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.556810 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557076 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557283 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557319 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557376 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557401 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557505 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557498 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557521 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557755 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557790 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557935 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.558191 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.558264 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.558280 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.558487 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.557016 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.558533 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559779 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.559860 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.560626 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.560886 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561114 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561130 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561234 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561232 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561570 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.561632 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562128 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562339 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562365 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.562899 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.563146 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.563155 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.563240 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.563218 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.563783 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.563905 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.563978 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.564162 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.564456 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.564480 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.564497 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.564527 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.564583 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.564720 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.565577 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.565406 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.565852 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.565889 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.565897 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.565912 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.566000 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.566059 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.566321 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.566520 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.566922 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.567016 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.567038 4909 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.571915 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.572203 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:49.071528685 +0000 UTC m=+21.217739891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.572907 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.573229 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.573244 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.573407 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.567417 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.567485 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.573937 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.573933 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.567974 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.568109 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.568108 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.568305 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.568946 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.569145 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.569346 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.569372 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.573363 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.573654 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.574933 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.575091 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.575137 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.567514 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.575783 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.576158 4909 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.576394 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.578420 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.578296 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.578867 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.578879 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.579026 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.579247 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.579377 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.580080 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.580145 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.581025 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.585298 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.585804 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.585838 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.586458 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.586856 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.586987 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.587221 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.587270 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.587297 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.587314 4909 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.587622 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.587582 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.587847 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.587944 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.588408 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.588802 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.588889 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.590333 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.590730 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.590794 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.590872 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.591368 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.591508 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.591681 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.591891 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.592094 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.592437 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.592730 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.593007 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.593284 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.593926 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.594915 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.595282 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.595320 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.595384 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.595668 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.595852 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:49.076355705 +0000 UTC m=+21.222566871 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.595955 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.596147 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:49.096118796 +0000 UTC m=+21.242329962 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.596626 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.596675 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.567181 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.597793 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.599508 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.599954 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:00:49.099924177 +0000 UTC m=+21.246135373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.600373 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
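
The UnmountVolume.TearDown failure just above ("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers") indicates the CSI driver has not yet re-registered with the restarted kubelet, so teardown of the PVC is retried until it does. The set of drivers a node currently advertises is recorded on its CSINode object; the client-go sketch below lists it. The kubeconfig path is a placeholder, and "crc" is the node name that appears throughout this log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, drv := range node.Spec.Drivers {
		fmt.Println("registered CSI driver:", drv.Name)
	}
}
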
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.600537 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.600697 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.601147 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.602356 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.602652 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.603218 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.603187 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.603364 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.602499 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.604442 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.609066 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.610041 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.611150 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.611619 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.611700 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.611845 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.611994 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.612749 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.612830 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.613176 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.613190 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.613500 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.618782 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.619507 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.620725 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
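
Every "Failed to update status for pod" record in this window fails the same way: the API server must consult the pod.network-node-identity.openshift.io webhook before admitting the status patch, and the call to https://127.0.0.1:9743 is refused, most likely because the network-node-identity pod that serves it (whose volumes are being mounted elsewhere in this log) is not up yet. A plain TCP probe of the endpoint, with the address copied from the log, reproduces the refusal; this is an illustrative check, not part of the log.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook endpoint unreachable:", err) // e.g. "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint reachable")
}
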
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.623003 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.623230 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.624720 4909 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.623463 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.623871 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.624090 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.624475 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.624657 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.624573 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.625430 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.625564 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:49.125545305 +0000 UTC m=+21.271756471 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.626086 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.629036 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.629961 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.629975 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.630264 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.630413 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.630234 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). 
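
The MountVolume.SetUp failures for the kube-api-access-* volumes are another restart symptom: a projected service-account volume combines the token with the namespace's kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and "object ... not registered" means those objects have not yet reappeared in the kubelet's informer caches, not that they are missing from the API. The client-go sketch below checks that they do exist server-side; the kubeconfig path is a placeholder and the namespace is taken from the failing mounts above.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "openshift-network-diagnostics"
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		_, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
		fmt.Printf("configmap %s/%s: err=%v\n", ns, name, err)
	}
}
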
InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.631011 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.631531 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.631698 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.631807 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.639896 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.640329 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.644561 4909 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218" exitCode=255 Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.644755 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218"} Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.648975 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
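
The "Finished parsing log file" and PLEG "container finished ... exitCode=255" records show how the restarted kubelet recovers container state: it re-reads the CRI container log under /var/log/pods/ to learn how the kube-apiserver-check-endpoints container last exited. Each line of such a file follows the CRI logging format, "<RFC3339Nano timestamp> <stream> <P|F> <message>". A minimal parser for that format, with an invented sample line for illustration:

package main

import (
	"fmt"
	"strings"
)

func main() {
	line := "2025-11-26T07:00:47.123456789Z stderr F some container output" // invented sample
	parts := strings.SplitN(line, " ", 4)
	if len(parts) != 4 {
		fmt.Println("not a CRI log line")
		return
	}
	ts, stream, tag, msg := parts[0], parts[1], parts[2], parts[3]
	fmt.Printf("time=%s stream=%s partial=%v msg=%q\n", ts, stream, tag == "P", msg)
}
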
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: E1126 07:00:48.652539 4909 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.652581 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.658182 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.658761 4909 scope.go:117] "RemoveContainer" containerID="4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663020 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663077 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663158 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663175 4909 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663187 4909 reconciler_common.go:293] 
"Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663201 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663215 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663229 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663241 4909 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663254 4909 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663267 4909 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663309 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663322 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663336 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663180 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663349 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663362 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663375 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663387 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663317 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663402 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663469 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663489 4909 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663502 4909 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663514 4909 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663526 4909 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663538 4909 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663551 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663563 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" 
DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663575 4909 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663640 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663741 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663754 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663766 4909 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663803 4909 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663815 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663829 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663865 4909 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.663882 4909 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664077 4909 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664094 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664117 4909 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664133 4909 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664149 4909 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664165 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664180 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664196 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664082 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664226 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664247 4909 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664266 4909 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664286 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664304 4909 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664320 4909 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664335 4909 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664349 4909 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664364 4909 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664380 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664394 4909 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664409 4909 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664424 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664439 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664454 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664469 4909 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664498 4909 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664514 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664530 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664548 4909 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664564 4909 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664579 4909 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664630 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664647 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664662 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664677 4909 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664692 4909 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664707 4909 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664725 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664740 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664835 4909 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.664856 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665058 4909 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665071 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665104 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665120 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665137 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665148 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665160 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665171 4909 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665183 4909 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665195 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665210 4909 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665242 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665255 4909 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665266 4909 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665278 4909 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665290 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665304 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665315 4909 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665328 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665338 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665351 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665362 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665374 4909 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665386 4909 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665397 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665409 4909 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665441 4909 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665454 4909 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665467 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665496 4909 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665547 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665559 4909 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665570 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665581 4909 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665613 4909 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665629 4909 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665646 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665659 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665673 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665686 4909 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on 
node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665698 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665710 4909 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665721 4909 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665734 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665746 4909 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665758 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665769 4909 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665781 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665793 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665805 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665816 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665830 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665841 4909 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath 
\"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665868 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665879 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665891 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665905 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665924 4909 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665939 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665956 4909 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665972 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665984 4909 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665997 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.665784 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666010 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666042 4909 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666057 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc 
kubenswrapper[4909]: I1126 07:00:48.666070 4909 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666082 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666094 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666106 4909 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666119 4909 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666149 4909 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666162 4909 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666174 4909 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666185 4909 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666197 4909 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666208 4909 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666221 4909 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666233 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666244 4909 
reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666256 4909 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666267 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666280 4909 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666291 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666303 4909 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666314 4909 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666325 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666337 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666349 4909 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666360 4909 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666372 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666385 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666465 
4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666477 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666489 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666502 4909 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666514 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666526 4909 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666538 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666549 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666561 4909 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666573 4909 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666584 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666620 4909 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666637 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on 
node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666652 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666663 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666685 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666697 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666708 4909 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666720 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.666732 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.678642 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.689919 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.699535 4909 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.703284 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.715737 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26
T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.733149 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.746238 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.757574 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.758187 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.767341 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.768486 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.771947 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.778249 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.784155 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: W1126 07:00:48.788815 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-2370f24a963fbfd991de2f809121accccf8a5564ce3ba1405277db215c042901 WatchSource:0}: Error finding container 2370f24a963fbfd991de2f809121accccf8a5564ce3ba1405277db215c042901: Status 404 returned error can't find the container with id 2370f24a963fbfd991de2f809121accccf8a5564ce3ba1405277db215c042901 Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.795185 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:48 crc kubenswrapper[4909]: W1126 07:00:48.800494 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-5d0e03c9180b62ce75089fad2be463291046532b9e0cc2951cd08a9bd597b2ec WatchSource:0}: Error finding container 5d0e03c9180b62ce75089fad2be463291046532b9e0cc2951cd08a9bd597b2ec: Status 404 returned error can't find the container with id 5d0e03c9180b62ce75089fad2be463291046532b9e0cc2951cd08a9bd597b2ec Nov 26 07:00:48 crc kubenswrapper[4909]: I1126 07:00:48.805705 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.171706 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.171762 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.171784 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.171804 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.171823 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.171943 4909 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.171957 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.171967 4909 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.172007 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:50.171992847 +0000 UTC m=+22.318204013 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.172312 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:00:50.172304846 +0000 UTC m=+22.318516012 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.172347 4909 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.172368 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:50.172362948 +0000 UTC m=+22.318574114 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.172404 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.172416 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.172423 4909 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.172441 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:50.17243595 +0000 UTC m=+22.318647116 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.172475 4909 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 07:00:49 crc kubenswrapper[4909]: E1126 07:00:49.172494 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:50.172487831 +0000 UTC m=+22.318698997 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.650363 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.652969 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4"} Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.653463 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.654041 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5d0e03c9180b62ce75089fad2be463291046532b9e0cc2951cd08a9bd597b2ec"} Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.656145 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c"} Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.656219 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41"} Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.656249 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2370f24a963fbfd991de2f809121accccf8a5564ce3ba1405277db215c042901"} Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.658649 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2"} Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.658706 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cb62a1ead941e01b7dfacf30bf4ff9af361a39083bea49520a483e10f83c5696"} Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.678569 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.706505 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.721050 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.734884 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.753687 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.770194 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z"
Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.787091 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z"
Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.806868 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z"
Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.828886 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z"
Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.843509 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z"
Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.860764 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z"
Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.877625 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z"
Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.892115 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z"
Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.909234 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z"
Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.929223 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z"
Nov 26 07:00:49 crc kubenswrapper[4909]: I1126 07:00:49.946094 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:49Z is after 2025-08-24T17:21:41Z"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.182920 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.183090 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183209 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:00:52.183159397 +0000 UTC m=+24.329370573 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183270 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183295 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183311 4909 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.183333 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183398 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:52.183358852 +0000 UTC m=+24.329570018 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.183429 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183451 4909 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183516 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:52.183499696 +0000 UTC m=+24.329710862 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.183455 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183582 4909 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183740 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:52.183731493 +0000 UTC m=+24.329942659 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183777 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183808 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.183830 4909 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.184173 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:52.184153854 +0000 UTC m=+24.330365040 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.498682 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.498763 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.498897 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.498915 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.499026 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 07:00:50 crc kubenswrapper[4909]: E1126 07:00:50.499099 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.503008 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.503871 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.505689 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.506690 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.508099 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.508846 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.509824 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.511250 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.512219 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.513639 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.514477 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.516314 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.517261 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.518143 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.519441 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.520376 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.521978 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.522679 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.524378 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.525259 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.525885 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.526655 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.527272 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.528163 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.528751 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.529548 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.530378 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.532815 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.533468 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.533985 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.534433 4909 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.534532 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.535859 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.536513 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.536922 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.538160 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.540288 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.541002 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.542015 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.542758 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.543257 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.543935 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.544646 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.545302 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.545807 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.546404 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.546959 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.547712 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.548208 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.549705 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.550570 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.551536 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.552392 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Nov 26 07:00:50 crc kubenswrapper[4909]: I1126 07:00:50.553239 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.203514 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.203671 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.203846 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.203865 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.203878 4909 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.203941 4909 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.204235 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.204262 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.204308 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:00:56.204291025 +0000 UTC m=+28.350502191 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.204324 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:56.204317086 +0000 UTC m=+28.350528252 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.204337 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:56.204330437 +0000 UTC m=+28.350541603 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.204363 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.204440 4909 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.204480 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:56.20447142 +0000 UTC m=+28.350682586 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.204523 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.204548 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.204565 4909 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.204663 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 07:00:56.204633795 +0000 UTC m=+28.350845151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.498759 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.498917 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.499086 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.499130 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.499304 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:00:52 crc kubenswrapper[4909]: E1126 07:00:52.499563 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.677674 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66"} Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.700757 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/
run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.721872 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.739392 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.753052 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.773403 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.793007 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.809778 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:52 crc kubenswrapper[4909]: I1126 07:00:52.829098 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:53 crc kubenswrapper[4909]: I1126 07:00:53.946835 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-pvgfr"] Nov 26 07:00:53 crc kubenswrapper[4909]: I1126 07:00:53.947080 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-pvgfr" Nov 26 07:00:53 crc kubenswrapper[4909]: I1126 07:00:53.952799 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 26 07:00:53 crc kubenswrapper[4909]: I1126 07:00:53.952719 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 26 07:00:53 crc kubenswrapper[4909]: I1126 07:00:53.953256 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.008265 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:53Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.022823 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f936ab99-34dc-455d-af35-8eb813a57065-hosts-file\") pod \"node-resolver-pvgfr\" (UID: \"f936ab99-34dc-455d-af35-8eb813a57065\") " pod="openshift-dns/node-resolver-pvgfr" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.022857 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzv4w\" (UniqueName: \"kubernetes.io/projected/f936ab99-34dc-455d-af35-8eb813a57065-kube-api-access-zzv4w\") pod \"node-resolver-pvgfr\" (UID: \"f936ab99-34dc-455d-af35-8eb813a57065\") " 
pod="openshift-dns/node-resolver-pvgfr" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.039001 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.053926 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.068998 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.077931 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is 
after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.088982 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.100191 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.112896 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.123839 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f936ab99-34dc-455d-af35-8eb813a57065-hosts-file\") pod \"node-resolver-pvgfr\" (UID: \"f936ab99-34dc-455d-af35-8eb813a57065\") " pod="openshift-dns/node-resolver-pvgfr" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.123875 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzv4w\" (UniqueName: \"kubernetes.io/projected/f936ab99-34dc-455d-af35-8eb813a57065-kube-api-access-zzv4w\") pod \"node-resolver-pvgfr\" (UID: \"f936ab99-34dc-455d-af35-8eb813a57065\") " pod="openshift-dns/node-resolver-pvgfr" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.123974 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f936ab99-34dc-455d-af35-8eb813a57065-hosts-file\") pod \"node-resolver-pvgfr\" (UID: \"f936ab99-34dc-455d-af35-8eb813a57065\") " pod="openshift-dns/node-resolver-pvgfr" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.124376 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.145450 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzv4w\" (UniqueName: \"kubernetes.io/projected/f936ab99-34dc-455d-af35-8eb813a57065-kube-api-access-zzv4w\") pod \"node-resolver-pvgfr\" (UID: \"f936ab99-34dc-455d-af35-8eb813a57065\") " pod="openshift-dns/node-resolver-pvgfr" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.257398 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-pvgfr" Nov 26 07:00:54 crc kubenswrapper[4909]: W1126 07:00:54.285774 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf936ab99_34dc_455d_af35_8eb813a57065.slice/crio-7fb6e72e7a143b4ec643c06f0f32b1261a5ae36050068ff8618f5c6f6dfb6561 WatchSource:0}: Error finding container 7fb6e72e7a143b4ec643c06f0f32b1261a5ae36050068ff8618f5c6f6dfb6561: Status 404 returned error can't find the container with id 7fb6e72e7a143b4ec643c06f0f32b1261a5ae36050068ff8618f5c6f6dfb6561 Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.347898 4909 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.349857 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.349897 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.349906 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.349970 4909 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.351022 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-4lffv"] Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.351388 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.351767 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6b4ts"] Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.352036 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.357828 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.359513 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78qth"] Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.359826 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.362283 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.363906 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.364230 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.364313 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.367282 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.367428 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-f4bjn"] Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.367802 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.367845 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.367812 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.368099 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.369466 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.375772 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.375834 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.375845 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.375955 4909 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.376022 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.376049 4909 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.376131 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.376194 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.376262 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.376325 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.376482 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.377068 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.377115 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.377126 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.377146 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.377157 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:54Z","lastTransitionTime":"2025-11-26T07:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.387122 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: E1126 07:00:54.392360 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.394971 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.395020 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.395035 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.395053 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.395115 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:54Z","lastTransitionTime":"2025-11-26T07:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.399692 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: E1126 07:00:54.407335 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"0
1e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.414356 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.414357 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.414393 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.414403 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.414418 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.414429 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:54Z","lastTransitionTime":"2025-11-26T07:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426221 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-env-overrides\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426260 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-script-lib\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426298 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/602939ce-1411-4a17-a42f-719afb7c6dd9-rootfs\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426314 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-var-lib-openvswitch\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426330 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-config\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426347 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovn-node-metrics-cert\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426363 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7869dc25-1c65-44bf-8a5a-6c1300c2d883-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426385 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-systemd-units\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426402 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-netns\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426417 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426435 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3d586ea3-b189-476f-9e44-4579388f3107-multus-daemon-config\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426450 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc9zv\" (UniqueName: \"kubernetes.io/projected/602939ce-1411-4a17-a42f-719afb7c6dd9-kube-api-access-jc9zv\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426467 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-os-release\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426488 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-etc-openvswitch\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426505 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-log-socket\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426543 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-var-lib-kubelet\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426559 4909 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-multus-conf-dir\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426576 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-netd\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426611 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-system-cni-dir\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426626 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-cnibin\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426643 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7869dc25-1c65-44bf-8a5a-6c1300c2d883-cni-binary-copy\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426666 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9k5k\" (UniqueName: \"kubernetes.io/projected/7869dc25-1c65-44bf-8a5a-6c1300c2d883-kube-api-access-h9k5k\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426687 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-node-log\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426717 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-systemd\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426741 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-ovn\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc 
kubenswrapper[4909]: I1126 07:00:54.426762 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-run-netns\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426782 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-var-lib-cni-multus\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426804 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426827 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-openvswitch\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426847 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-cnibin\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426867 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-multus-socket-dir-parent\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426887 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-run-k8s-cni-cncf-io\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426943 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426964 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-var-lib-cni-bin\") pod \"multus-6b4ts\" (UID: 
\"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.426985 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-run-multus-certs\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427006 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3d586ea3-b189-476f-9e44-4579388f3107-cni-binary-copy\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427073 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-kubelet\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427191 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8scj\" (UniqueName: \"kubernetes.io/projected/bbfa11b9-2582-454a-9a97-63d505eccc8b-kube-api-access-s8scj\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427246 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-system-cni-dir\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427293 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-etc-kubernetes\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427332 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv5ph\" (UniqueName: \"kubernetes.io/projected/3d586ea3-b189-476f-9e44-4579388f3107-kube-api-access-cv5ph\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427373 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-bin\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427395 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/602939ce-1411-4a17-a42f-719afb7c6dd9-mcd-auth-proxy-config\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427416 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-slash\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427463 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-os-release\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427499 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/602939ce-1411-4a17-a42f-719afb7c6dd9-proxy-tls\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427532 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-multus-cni-dir\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.427548 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-hostroot\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.428004 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: E1126 07:00:54.428067 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.430542 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.430571 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.430580 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.430606 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.430617 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:54Z","lastTransitionTime":"2025-11-26T07:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.437386 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: 
E1126 07:00:54.445238 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.448421 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.448447 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:54 
crc kubenswrapper[4909]: I1126 07:00:54.448456 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.448468 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.448477 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:54Z","lastTransitionTime":"2025-11-26T07:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.452391 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: E1126 07:00:54.459328 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"0
1e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: E1126 07:00:54.459430 4909 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.461610 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.461648 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.461659 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.461674 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.461684 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:54Z","lastTransitionTime":"2025-11-26T07:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.465639 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.479350 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.490036 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.497987 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:00:54 crc kubenswrapper[4909]: E1126 07:00:54.498095 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.498111 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.497992 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:00:54 crc kubenswrapper[4909]: E1126 07:00:54.498237 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:00:54 crc kubenswrapper[4909]: E1126 07:00:54.498283 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.504558 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is 
after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.520635 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.528486 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc9zv\" (UniqueName: \"kubernetes.io/projected/602939ce-1411-4a17-a42f-719afb7c6dd9-kube-api-access-jc9zv\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.528622 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-os-release\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.528710 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3d586ea3-b189-476f-9e44-4579388f3107-multus-daemon-config\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.528799 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-etc-openvswitch\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.528882 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-log-socket\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529050 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-var-lib-kubelet\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529138 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-multus-conf-dir\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529208 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-var-lib-kubelet\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529232 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-multus-conf-dir\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529199 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-os-release\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529167 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-log-socket\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529319 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-etc-openvswitch\") pod \"ovnkube-node-78qth\" (UID: 
\"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529366 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-node-log\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529226 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-node-log\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529533 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-netd\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529637 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-system-cni-dir\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529718 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-cnibin\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529799 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7869dc25-1c65-44bf-8a5a-6c1300c2d883-cni-binary-copy\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529880 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9k5k\" (UniqueName: \"kubernetes.io/projected/7869dc25-1c65-44bf-8a5a-6c1300c2d883-kube-api-access-h9k5k\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529979 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-systemd\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530062 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-cnibin\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530074 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-ovn\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530143 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-run-netns\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530162 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-var-lib-cni-multus\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530183 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-openvswitch\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530198 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530216 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-multus-socket-dir-parent\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530232 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-run-k8s-cni-cncf-io\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530248 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-cnibin\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530273 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530290 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-var-lib-cni-bin\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530306 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-run-multus-certs\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530324 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-kubelet\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530341 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8scj\" (UniqueName: \"kubernetes.io/projected/bbfa11b9-2582-454a-9a97-63d505eccc8b-kube-api-access-s8scj\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530357 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3d586ea3-b189-476f-9e44-4579388f3107-cni-binary-copy\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530373 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-system-cni-dir\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530389 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-bin\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530405 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-etc-kubernetes\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530420 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv5ph\" (UniqueName: \"kubernetes.io/projected/3d586ea3-b189-476f-9e44-4579388f3107-kube-api-access-cv5ph\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530440 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/602939ce-1411-4a17-a42f-719afb7c6dd9-mcd-auth-proxy-config\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530455 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-slash\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530478 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-os-release\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530510 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/602939ce-1411-4a17-a42f-719afb7c6dd9-proxy-tls\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530525 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-multus-cni-dir\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530540 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-hostroot\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530566 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-env-overrides\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530604 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-script-lib\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530635 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/602939ce-1411-4a17-a42f-719afb7c6dd9-rootfs\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530656 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-var-lib-openvswitch\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530671 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-config\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530685 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovn-node-metrics-cert\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530704 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7869dc25-1c65-44bf-8a5a-6c1300c2d883-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530722 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-systemd-units\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530740 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-netns\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530761 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530766 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-run-k8s-cni-cncf-io\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.529563 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-netd\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530837 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-bin\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530865 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-etc-kubernetes\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530953 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-system-cni-dir\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530993 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-run-netns\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.531024 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-systemd\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.530006 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3d586ea3-b189-476f-9e44-4579388f3107-multus-daemon-config\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.531335 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-ovn\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.531450 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-hostroot\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.531519 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/602939ce-1411-4a17-a42f-719afb7c6dd9-mcd-auth-proxy-config\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.531634 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-slash\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 
07:00:54.531749 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-os-release\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.531814 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-config\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.532178 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-env-overrides\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.532224 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-kubelet\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.532401 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/602939ce-1411-4a17-a42f-719afb7c6dd9-rootfs\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.532402 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-netns\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.532427 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-multus-cni-dir\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.532457 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-var-lib-cni-multus\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.532474 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-openvswitch\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.532767 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-multus-socket-dir-parent\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.532782 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-cnibin\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.533146 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.533176 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-var-lib-openvswitch\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.533198 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-run-multus-certs\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.533221 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-systemd-units\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.533221 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3d586ea3-b189-476f-9e44-4579388f3107-cni-binary-copy\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.533239 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d586ea3-b189-476f-9e44-4579388f3107-host-var-lib-cni-bin\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.533254 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.533275 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.533286 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-script-lib\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.535911 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovn-node-metrics-cert\") pod \"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.537210 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7869dc25-1c65-44bf-8a5a-6c1300c2d883-cni-binary-copy\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.537294 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7869dc25-1c65-44bf-8a5a-6c1300c2d883-system-cni-dir\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.538084 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7869dc25-1c65-44bf-8a5a-6c1300c2d883-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.539443 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.545112 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc9zv\" (UniqueName: \"kubernetes.io/projected/602939ce-1411-4a17-a42f-719afb7c6dd9-kube-api-access-jc9zv\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.546037 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/602939ce-1411-4a17-a42f-719afb7c6dd9-proxy-tls\") pod \"machine-config-daemon-4lffv\" (UID: \"602939ce-1411-4a17-a42f-719afb7c6dd9\") " pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.551186 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv5ph\" (UniqueName: \"kubernetes.io/projected/3d586ea3-b189-476f-9e44-4579388f3107-kube-api-access-cv5ph\") pod \"multus-6b4ts\" (UID: \"3d586ea3-b189-476f-9e44-4579388f3107\") " pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.551849 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8scj\" (UniqueName: \"kubernetes.io/projected/bbfa11b9-2582-454a-9a97-63d505eccc8b-kube-api-access-s8scj\") pod 
\"ovnkube-node-78qth\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.553448 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.562084 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9k5k\" (UniqueName: \"kubernetes.io/projected/7869dc25-1c65-44bf-8a5a-6c1300c2d883-kube-api-access-h9k5k\") pod \"multus-additional-cni-plugins-f4bjn\" (UID: \"7869dc25-1c65-44bf-8a5a-6c1300c2d883\") " pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.564220 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.564257 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:54 crc 
kubenswrapper[4909]: I1126 07:00:54.564270 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.564292 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.564305 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:54Z","lastTransitionTime":"2025-11-26T07:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.567000 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.582370 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.601841 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.615723 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.640120 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.656235 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.667068 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.667112 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.667123 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.667140 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.667152 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:54Z","lastTransitionTime":"2025-11-26T07:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.671793 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.679119 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.685188 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pvgfr" event={"ID":"f936ab99-34dc-455d-af35-8eb813a57065","Type":"ContainerStarted","Data":"b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.685315 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pvgfr" event={"ID":"f936ab99-34dc-455d-af35-8eb813a57065","Type":"ContainerStarted","Data":"7fb6e72e7a143b4ec643c06f0f32b1261a5ae36050068ff8618f5c6f6dfb6561"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.688547 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-6b4ts" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.688531 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.696048 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.701680 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.702166 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.718570 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: W1126 07:00:54.719794 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbfa11b9_2582_454a_9a97_63d505eccc8b.slice/crio-052cea06e781cab69fe47aef87dfd12543446ec70651b0b66677e37c3391ee9b WatchSource:0}: Error finding container 052cea06e781cab69fe47aef87dfd12543446ec70651b0b66677e37c3391ee9b: Status 404 returned error can't find the container with id 052cea06e781cab69fe47aef87dfd12543446ec70651b0b66677e37c3391ee9b Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.743811 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.758894 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.775148 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.776213 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.776261 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.776321 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.776339 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.776351 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:54Z","lastTransitionTime":"2025-11-26T07:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.797903 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.816128 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.829635 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.845127 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.859556 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.875722 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.880365 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.880391 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.880399 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.880412 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.880420 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:54Z","lastTransitionTime":"2025-11-26T07:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.897141 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.912620 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.925846 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.944434 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T07:00:54Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.982718 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.982767 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.982776 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.982790 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:54 crc kubenswrapper[4909]: I1126 07:00:54.982799 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:54Z","lastTransitionTime":"2025-11-26T07:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.085043 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.085085 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.085094 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.085110 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.085120 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:55Z","lastTransitionTime":"2025-11-26T07:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.187836 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.188232 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.188249 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.188274 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.188292 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:55Z","lastTransitionTime":"2025-11-26T07:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.291242 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.291334 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.291353 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.291377 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.291395 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:55Z","lastTransitionTime":"2025-11-26T07:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.394012 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.394056 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.394065 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.394084 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.394097 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:55Z","lastTransitionTime":"2025-11-26T07:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.496521 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.496560 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.496571 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.496614 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.496628 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:55Z","lastTransitionTime":"2025-11-26T07:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.600535 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.600576 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.600606 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.600625 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.600635 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:55Z","lastTransitionTime":"2025-11-26T07:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.690780 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef" exitCode=0 Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.690872 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.690950 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"052cea06e781cab69fe47aef87dfd12543446ec70651b0b66677e37c3391ee9b"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.692582 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.692642 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.692657 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"f2edf38c1d85e590b13c140386723186473f3ca9f1898407c508683a3e40e475"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.693991 4909 generic.go:334] "Generic (PLEG): container finished" podID="7869dc25-1c65-44bf-8a5a-6c1300c2d883" containerID="c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e" exitCode=0 Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.694079 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" event={"ID":"7869dc25-1c65-44bf-8a5a-6c1300c2d883","Type":"ContainerDied","Data":"c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.694122 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" event={"ID":"7869dc25-1c65-44bf-8a5a-6c1300c2d883","Type":"ContainerStarted","Data":"33887d5d11a5a345ff7bc29114c61d3b1ee2a7a487f524f07d86029871980da0"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.696237 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b4ts" event={"ID":"3d586ea3-b189-476f-9e44-4579388f3107","Type":"ContainerStarted","Data":"a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.696285 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b4ts" event={"ID":"3d586ea3-b189-476f-9e44-4579388f3107","Type":"ContainerStarted","Data":"5e9b45fa92ec693a992fc0d8282ccc6c0755af5cb8d161dcb1025fce6c0421c9"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.702875 4909 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.702907 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.702920 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.702942 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.702958 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:55Z","lastTransitionTime":"2025-11-26T07:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.721965 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.739527 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.753474 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.768644 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.782415 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.795268 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.805662 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.805752 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.805767 4909 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.806254 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.806276 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:55Z","lastTransitionTime":"2025-11-26T07:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.809506 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.821837 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.843706 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.855286 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.865747 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.876887 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.890568 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.908824 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.912487 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.912518 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.912527 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.912541 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.912551 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:55Z","lastTransitionTime":"2025-11-26T07:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.920044 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.931095 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.943993 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.955604 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.966288 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.978127 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:55 crc kubenswrapper[4909]: I1126 07:00:55.991731 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:55Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.008684 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.014715 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.014767 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.014784 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.014806 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.014823 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:56Z","lastTransitionTime":"2025-11-26T07:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.023089 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.035424 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.047398 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.061642 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.116467 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.116784 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.116878 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.116964 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.117036 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:56Z","lastTransitionTime":"2025-11-26T07:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.220320 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.220354 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.220363 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.220380 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.220390 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:56Z","lastTransitionTime":"2025-11-26T07:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.248939 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.249047 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.249221 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.249273 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.249416 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.249674 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.249697 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.249711 4909 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.249759 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:04.249742646 +0000 UTC m=+36.395953812 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.250131 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:01:04.250119586 +0000 UTC m=+36.396330752 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.250189 4909 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.250220 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:04.250210088 +0000 UTC m=+36.396421254 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.250275 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.250287 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.250297 4909 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.250322 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:04.250314601 +0000 UTC m=+36.396525767 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.250371 4909 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.250400 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:04.250390123 +0000 UTC m=+36.396601289 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.323389 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.323486 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.323498 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.323515 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.323527 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:56Z","lastTransitionTime":"2025-11-26T07:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.427618 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.427664 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.427677 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.427692 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.427701 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:56Z","lastTransitionTime":"2025-11-26T07:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.498172 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.498240 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.498305 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.498313 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.498386 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:00:56 crc kubenswrapper[4909]: E1126 07:00:56.498457 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.530275 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.530331 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.530349 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.530372 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.530389 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:56Z","lastTransitionTime":"2025-11-26T07:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.633262 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.633289 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.633297 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.633311 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.633320 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:56Z","lastTransitionTime":"2025-11-26T07:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.707025 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.707069 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.707082 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.707094 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.707510 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.707545 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.710699 4909 generic.go:334] "Generic (PLEG): container finished" podID="7869dc25-1c65-44bf-8a5a-6c1300c2d883" containerID="b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e" exitCode=0 Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.710729 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" event={"ID":"7869dc25-1c65-44bf-8a5a-6c1300c2d883","Type":"ContainerDied","Data":"b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.725459 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.736396 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.736434 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.736443 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.736458 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.736467 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:56Z","lastTransitionTime":"2025-11-26T07:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.738493 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.754661 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.768512 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.793164 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.806304 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.818201 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.831565 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.838738 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.838776 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.838789 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.838806 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.838817 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:56Z","lastTransitionTime":"2025-11-26T07:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.843649 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.855371 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50
ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.865563 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.874510 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.886326 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.942064 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.942207 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.942217 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.942287 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:56 crc kubenswrapper[4909]: I1126 07:00:56.942299 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:56Z","lastTransitionTime":"2025-11-26T07:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.045670 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.045704 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.045712 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.045725 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.045734 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:57Z","lastTransitionTime":"2025-11-26T07:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.153423 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.153472 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.153482 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.153518 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.153528 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:57Z","lastTransitionTime":"2025-11-26T07:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.256346 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.256378 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.256386 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.256399 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.256407 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:57Z","lastTransitionTime":"2025-11-26T07:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.359178 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.359251 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.359263 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.359285 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.359297 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:57Z","lastTransitionTime":"2025-11-26T07:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.462099 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.462128 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.462136 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.462148 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.462158 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:57Z","lastTransitionTime":"2025-11-26T07:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.565077 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.565750 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.565780 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.565805 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.565822 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:57Z","lastTransitionTime":"2025-11-26T07:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.669482 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.669529 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.669542 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.669558 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.669570 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:57Z","lastTransitionTime":"2025-11-26T07:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.718206 4909 generic.go:334] "Generic (PLEG): container finished" podID="7869dc25-1c65-44bf-8a5a-6c1300c2d883" containerID="ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7" exitCode=0 Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.718258 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" event={"ID":"7869dc25-1c65-44bf-8a5a-6c1300c2d883","Type":"ContainerDied","Data":"ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7"} Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.739756 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.770510 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.773770 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.773854 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.773874 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.773978 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.773997 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:57Z","lastTransitionTime":"2025-11-26T07:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.788734 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.810404 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26
T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.834575 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.859410 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z 
is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.874709 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.876703 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.876741 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.876751 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.876766 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.876775 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:57Z","lastTransitionTime":"2025-11-26T07:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.889585 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.907478 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.923498 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.934828 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.946260 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.959023 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:57Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.978697 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.978732 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.978741 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.978756 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:57 crc kubenswrapper[4909]: I1126 07:00:57.978765 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:57Z","lastTransitionTime":"2025-11-26T07:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.046554 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-snbtv"] Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.046910 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-snbtv" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.049647 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.049901 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.049984 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.050181 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.063491 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.077153 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.081047 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.081099 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.081113 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.081133 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.081147 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:58Z","lastTransitionTime":"2025-11-26T07:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.092644 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.109875 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.124990 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.146201 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.164013 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.167791 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94r7x\" (UniqueName: \"kubernetes.io/projected/c374d623-8f62-4336-a493-7a07dabe5fa3-kube-api-access-94r7x\") pod \"node-ca-snbtv\" (UID: \"c374d623-8f62-4336-a493-7a07dabe5fa3\") " pod="openshift-image-registry/node-ca-snbtv" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.167843 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c374d623-8f62-4336-a493-7a07dabe5fa3-host\") pod \"node-ca-snbtv\" (UID: \"c374d623-8f62-4336-a493-7a07dabe5fa3\") " pod="openshift-image-registry/node-ca-snbtv" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.167861 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c374d623-8f62-4336-a493-7a07dabe5fa3-serviceca\") pod \"node-ca-snbtv\" (UID: \"c374d623-8f62-4336-a493-7a07dabe5fa3\") " pod="openshift-image-registry/node-ca-snbtv" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.173939 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.184325 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.184380 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.184390 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.184409 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.184420 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:58Z","lastTransitionTime":"2025-11-26T07:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.190279 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.206653 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.220294 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.232363 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.243401 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.270109 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94r7x\" (UniqueName: \"kubernetes.io/projected/c374d623-8f62-4336-a493-7a07dabe5fa3-kube-api-access-94r7x\") pod \"node-ca-snbtv\" (UID: \"c374d623-8f62-4336-a493-7a07dabe5fa3\") " pod="openshift-image-registry/node-ca-snbtv" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.270188 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c374d623-8f62-4336-a493-7a07dabe5fa3-host\") pod \"node-ca-snbtv\" (UID: \"c374d623-8f62-4336-a493-7a07dabe5fa3\") " pod="openshift-image-registry/node-ca-snbtv" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.270224 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c374d623-8f62-4336-a493-7a07dabe5fa3-serviceca\") pod \"node-ca-snbtv\" (UID: \"c374d623-8f62-4336-a493-7a07dabe5fa3\") " pod="openshift-image-registry/node-ca-snbtv" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.270614 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c374d623-8f62-4336-a493-7a07dabe5fa3-host\") pod \"node-ca-snbtv\" (UID: \"c374d623-8f62-4336-a493-7a07dabe5fa3\") " pod="openshift-image-registry/node-ca-snbtv" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.271648 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c374d623-8f62-4336-a493-7a07dabe5fa3-serviceca\") pod \"node-ca-snbtv\" (UID: \"c374d623-8f62-4336-a493-7a07dabe5fa3\") " pod="openshift-image-registry/node-ca-snbtv" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.272275 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.286898 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.286948 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.286964 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.286985 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.287001 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:58Z","lastTransitionTime":"2025-11-26T07:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.299993 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94r7x\" (UniqueName: \"kubernetes.io/projected/c374d623-8f62-4336-a493-7a07dabe5fa3-kube-api-access-94r7x\") pod \"node-ca-snbtv\" (UID: \"c374d623-8f62-4336-a493-7a07dabe5fa3\") " pod="openshift-image-registry/node-ca-snbtv" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.389545 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.389613 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.389625 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.389642 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.389679 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:58Z","lastTransitionTime":"2025-11-26T07:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.445108 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-snbtv" Nov 26 07:00:58 crc kubenswrapper[4909]: W1126 07:00:58.465333 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc374d623_8f62_4336_a493_7a07dabe5fa3.slice/crio-4a1e01d6083801ada9aa46e3d32f3c3d52e057701d22170364d10e6b1b5a05db WatchSource:0}: Error finding container 4a1e01d6083801ada9aa46e3d32f3c3d52e057701d22170364d10e6b1b5a05db: Status 404 returned error can't find the container with id 4a1e01d6083801ada9aa46e3d32f3c3d52e057701d22170364d10e6b1b5a05db Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.492323 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.492382 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.492406 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.492434 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.492457 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:58Z","lastTransitionTime":"2025-11-26T07:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.500232 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:00:58 crc kubenswrapper[4909]: E1126 07:00:58.500350 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.500714 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:00:58 crc kubenswrapper[4909]: E1126 07:00:58.500791 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.501019 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:00:58 crc kubenswrapper[4909]: E1126 07:00:58.501170 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.517751 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.529310 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.544353 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.558102 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.571737 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.588505 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.594200 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.594234 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.594245 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.594262 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.594278 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:58Z","lastTransitionTime":"2025-11-26T07:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.605507 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.624791 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.642170 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.654720 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.670877 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.685970 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.696137 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.696176 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.696190 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.696223 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.696234 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:58Z","lastTransitionTime":"2025-11-26T07:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.706958 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.717394 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.723210 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-snbtv" event={"ID":"c374d623-8f62-4336-a493-7a07dabe5fa3","Type":"ContainerStarted","Data":"4a1e01d6083801ada9aa46e3d32f3c3d52e057701d22170364d10e6b1b5a05db"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.726859 
4909 generic.go:334] "Generic (PLEG): container finished" podID="7869dc25-1c65-44bf-8a5a-6c1300c2d883" containerID="5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1" exitCode=0 Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.726910 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" event={"ID":"7869dc25-1c65-44bf-8a5a-6c1300c2d883","Type":"ContainerDied","Data":"5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.743323 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.758885 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.781079 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.800256 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.800305 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.800317 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.800335 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.800347 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:58Z","lastTransitionTime":"2025-11-26T07:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.801767 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.817764 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.831910 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.846191 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.865924 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.880046 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.890924 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.902427 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.903650 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.903684 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.903696 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.903715 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.903726 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:58Z","lastTransitionTime":"2025-11-26T07:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.915753 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.926813 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:58 crc kubenswrapper[4909]: I1126 07:00:58.937844 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.006058 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.006100 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.006112 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.006129 4909 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.006142 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:59Z","lastTransitionTime":"2025-11-26T07:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.109642 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.109686 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.109697 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.109713 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.109724 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:59Z","lastTransitionTime":"2025-11-26T07:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.213685 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.213740 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.213755 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.213774 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.213787 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:59Z","lastTransitionTime":"2025-11-26T07:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.316041 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.316067 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.316077 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.316092 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.316102 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:59Z","lastTransitionTime":"2025-11-26T07:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.418967 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.419025 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.419037 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.419073 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.419088 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:59Z","lastTransitionTime":"2025-11-26T07:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.521978 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.522026 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.522039 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.522055 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.522067 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:59Z","lastTransitionTime":"2025-11-26T07:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.625228 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.625275 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.625287 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.625305 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.625321 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:59Z","lastTransitionTime":"2025-11-26T07:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.728051 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.728098 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.728110 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.728129 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.728142 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:59Z","lastTransitionTime":"2025-11-26T07:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.732172 4909 generic.go:334] "Generic (PLEG): container finished" podID="7869dc25-1c65-44bf-8a5a-6c1300c2d883" containerID="8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81" exitCode=0 Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.732236 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" event={"ID":"7869dc25-1c65-44bf-8a5a-6c1300c2d883","Type":"ContainerDied","Data":"8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.741653 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.742760 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-snbtv" event={"ID":"c374d623-8f62-4336-a493-7a07dabe5fa3","Type":"ContainerStarted","Data":"e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.751037 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.765924 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.776361 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.789583 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.805226 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.819677 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.834038 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.835132 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.835206 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.835218 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.835238 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.835253 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:59Z","lastTransitionTime":"2025-11-26T07:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.848309 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.860257 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.871189 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.883403 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.908322 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.919784 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.935403 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.937395 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.937421 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.937430 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.937442 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.937451 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:00:59Z","lastTransitionTime":"2025-11-26T07:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.948541 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.961308 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.974170 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:00:59 crc kubenswrapper[4909]: I1126 07:00:59.987313 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:00:59Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.002288 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.016059 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.037498 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.039799 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.039837 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.039848 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.039862 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.039876 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:00Z","lastTransitionTime":"2025-11-26T07:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.052683 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.073289 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.088370 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.102092 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.131529 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.143287 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.143354 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.143365 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.143382 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.143395 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:00Z","lastTransitionTime":"2025-11-26T07:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.148783 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.164446 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.246257 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.246316 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.246336 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.246360 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.246377 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:00Z","lastTransitionTime":"2025-11-26T07:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.349259 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.349324 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.349348 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.349380 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.349402 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:00Z","lastTransitionTime":"2025-11-26T07:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.452806 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.452881 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.452906 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.452936 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.452960 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:00Z","lastTransitionTime":"2025-11-26T07:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.498778 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.498816 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.498804 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:00 crc kubenswrapper[4909]: E1126 07:01:00.498991 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:00 crc kubenswrapper[4909]: E1126 07:01:00.499119 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:00 crc kubenswrapper[4909]: E1126 07:01:00.499263 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.556388 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.556420 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.556430 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.556445 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.556456 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:00Z","lastTransitionTime":"2025-11-26T07:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.659395 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.659447 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.659507 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.659531 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.659555 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:00Z","lastTransitionTime":"2025-11-26T07:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.751669 4909 generic.go:334] "Generic (PLEG): container finished" podID="7869dc25-1c65-44bf-8a5a-6c1300c2d883" containerID="5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b" exitCode=0 Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.752169 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" event={"ID":"7869dc25-1c65-44bf-8a5a-6c1300c2d883","Type":"ContainerDied","Data":"5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b"} Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.762153 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.762224 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.762261 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.762298 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.762324 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:00Z","lastTransitionTime":"2025-11-26T07:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.775465 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.802951 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.816526 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.829125 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.845611 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.863945 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.865222 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.865273 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.865296 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.865325 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.865348 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:00Z","lastTransitionTime":"2025-11-26T07:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.879392 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.897352 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.909560 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.923296 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.940879 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.953075 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.965762 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.967429 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.967458 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.967470 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.967522 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.967534 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:00Z","lastTransitionTime":"2025-11-26T07:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:00 crc kubenswrapper[4909]: I1126 07:01:00.978801 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:00Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.071131 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.071185 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.071207 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.071237 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.071261 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:01Z","lastTransitionTime":"2025-11-26T07:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.174090 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.174132 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.174142 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.174157 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.174166 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:01Z","lastTransitionTime":"2025-11-26T07:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.277077 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.277119 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.277128 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.277142 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.277151 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:01Z","lastTransitionTime":"2025-11-26T07:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.380700 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.380757 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.380835 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.380860 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.380876 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:01Z","lastTransitionTime":"2025-11-26T07:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.483725 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.483787 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.483827 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.483859 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.483882 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:01Z","lastTransitionTime":"2025-11-26T07:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.586581 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.586647 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.586663 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.586714 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.586729 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:01Z","lastTransitionTime":"2025-11-26T07:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.689782 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.689867 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.689903 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.689937 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.689960 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:01Z","lastTransitionTime":"2025-11-26T07:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.762567 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" event={"ID":"7869dc25-1c65-44bf-8a5a-6c1300c2d883","Type":"ContainerStarted","Data":"f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd"} Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.769651 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c"} Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.770098 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.770166 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.780676 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.793346 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.793391 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.793405 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.793426 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.793474 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:01Z","lastTransitionTime":"2025-11-26T07:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.797670 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.806475 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.806843 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.816482 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.836632 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.855212 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.869183 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.886699 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.897032 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.897276 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:01 crc 
kubenswrapper[4909]: I1126 07:01:01.897289 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.897306 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.897326 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:01Z","lastTransitionTime":"2025-11-26T07:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.901220 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.921088 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z 
is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.931645 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.943390 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.953264 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.964503 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.976389 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.988299 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.999801 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.999830 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:01 crc kubenswrapper[4909]: I1126 07:01:01.999839 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:01.999855 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:01.999865 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:01Z","lastTransitionTime":"2025-11-26T07:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.000195 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:01Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.012021 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.019476 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.029178 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.037747 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.054789 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47782598affc1c6f3945eb4f209d0b48333e60a0
f44bc2730d2856e5a953035c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.065335 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.075067 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.086945 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.098041 4909 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.101951 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.101999 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.102012 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.102030 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.102042 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:02Z","lastTransitionTime":"2025-11-26T07:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.109419 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.117476 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.131274 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:02Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.204919 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.204970 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.204979 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.204999 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.205012 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:02Z","lastTransitionTime":"2025-11-26T07:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.307800 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.307855 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.307868 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.307885 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.307899 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:02Z","lastTransitionTime":"2025-11-26T07:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.410960 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.411014 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.411025 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.411040 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.411053 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:02Z","lastTransitionTime":"2025-11-26T07:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.498835 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.498916 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:02 crc kubenswrapper[4909]: E1126 07:01:02.499019 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:02 crc kubenswrapper[4909]: E1126 07:01:02.499126 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.499211 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:02 crc kubenswrapper[4909]: E1126 07:01:02.499552 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.513933 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.514290 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.514423 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.514554 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.514705 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:02Z","lastTransitionTime":"2025-11-26T07:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.617897 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.617949 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.617964 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.617984 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.617996 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:02Z","lastTransitionTime":"2025-11-26T07:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.721165 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.721241 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.721260 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.721283 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.721303 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:02Z","lastTransitionTime":"2025-11-26T07:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.773539 4909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.824278 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.829050 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.829168 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.829273 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.829343 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:02Z","lastTransitionTime":"2025-11-26T07:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.932385 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.932448 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.932458 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.932482 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:02 crc kubenswrapper[4909]: I1126 07:01:02.932496 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:02Z","lastTransitionTime":"2025-11-26T07:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.035925 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.035978 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.035996 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.036018 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.036037 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:03Z","lastTransitionTime":"2025-11-26T07:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.138625 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.138662 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.138674 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.138692 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.138703 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:03Z","lastTransitionTime":"2025-11-26T07:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.241023 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.241080 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.241094 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.241115 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.241130 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:03Z","lastTransitionTime":"2025-11-26T07:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.344581 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.344641 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.344652 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.344672 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.344684 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:03Z","lastTransitionTime":"2025-11-26T07:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.447549 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.447623 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.447633 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.447663 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.447676 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:03Z","lastTransitionTime":"2025-11-26T07:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.551024 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.551076 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.551090 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.551107 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.551118 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:03Z","lastTransitionTime":"2025-11-26T07:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.654730 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.654810 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.654830 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.654858 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.654878 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:03Z","lastTransitionTime":"2025-11-26T07:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.757665 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.757715 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.757724 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.757740 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.757749 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:03Z","lastTransitionTime":"2025-11-26T07:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.776883 4909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.860677 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.860737 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.860751 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.860772 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.860785 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:03Z","lastTransitionTime":"2025-11-26T07:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.963909 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.963960 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.963974 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.963997 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:03 crc kubenswrapper[4909]: I1126 07:01:03.964012 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:03Z","lastTransitionTime":"2025-11-26T07:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.068075 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.068158 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.068190 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.068235 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.068248 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.171481 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.171533 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.171552 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.171580 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.171674 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.274538 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.274614 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.274626 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.274648 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.274662 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.332545 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.332813 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.332896 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.332942 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.332984 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333066 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:01:20.333035642 +0000 UTC m=+52.479246818 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333104 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333136 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333156 4909 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333184 4909 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333210 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:20.333193316 +0000 UTC m=+52.479404492 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333220 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333233 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:20.333222327 +0000 UTC m=+52.479433613 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333136 4909 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333245 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333267 4909 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333274 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:20.333265758 +0000 UTC m=+52.479477044 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.333321 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:20.333303109 +0000 UTC m=+52.479514315 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.378395 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.378471 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.378490 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.378526 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.378554 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.482401 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.482479 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.482501 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.482532 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.482552 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.498977 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.499028 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.499068 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.499215 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.499382 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.499529 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.586197 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.586253 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.586264 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.586285 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.586665 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.621328 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.621410 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.621431 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.621459 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.621480 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.645207 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.650337 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.650398 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.650416 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.650449 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.650470 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.672187 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.677984 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.678064 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
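Every retry above fails for the same reason: the serving certificate of the network-node-identity webhook expired on 2025-08-24, while the node clock reads 2025-11-26. A minimal Go sketch (not part of the log) for confirming this directly; the address 127.0.0.1:9743 is taken from the webhook URL in the error above.

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify lets us fetch the certificate even though normal
	// verification would fail; we only want to inspect its validity window.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%v\n  notBefore=%s\n  notAfter=%s\n  expired=%t\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}

On the node shown in this log, the notAfter printed for the leaf certificate should match the 2025-08-24T17:21:41Z deadline quoted in the x509 error.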
event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.678088 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.678118 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.678141 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.696681 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.704788 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.704831 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
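Separately from the webhook failure, the Ready condition itself is False because the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/ (on this cluster that directory is normally populated once the OVN-Kubernetes pods come up). A minimal sketch, assuming access to the node filesystem, that performs the same check by hand:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the KubeletNotReady message above.
	confDir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		log.Fatalf("read %s: %v", confDir, err)
	}
	if len(entries) == 0 {
		fmt.Println("no CNI configuration files; matches the NetworkPluginNotReady condition")
		return
	}
	for _, e := range entries {
		// The network plugin drops *.conf/*.conflist files here when it is up.
		fmt.Println(filepath.Join(confDir, e.Name()))
	}
}

An empty (or missing) directory is consistent with the ovnkube-controller container crash recorded further down in this log.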
event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.704840 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.704860 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.704875 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.727251 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.733021 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.733076 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
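The err string embeds the rejected status patch as an escaped JSON document, which makes the conditions hard to read in place. A sketch that recovers them from a single journal record piped on stdin; the delimiters ("failed to patch status \"" ... "\" for node") and the one level of backslash escaping are assumptions based on the lines above, not a documented format.

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

func main() {
	b, err := io.ReadAll(os.Stdin)
	if err != nil {
		log.Fatal(err)
	}
	line := string(b)

	// The patch sits between these two markers in the logged err string.
	const openTok = `failed to patch status \"`
	const closeTok = `\" for node`
	start := strings.Index(line, openTok)
	end := strings.LastIndex(line, closeTok)
	if start < 0 || end < 0 || end <= start {
		log.Fatal("no status patch found on stdin")
	}

	// Undo the logger's quoting so the payload is plain JSON again.
	payload := strings.ReplaceAll(line[start+len(openTok):end], `\\\"`, `"`)

	var patch struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
				Reason string `json:"reason"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal([]byte(payload), &patch); err != nil {
		log.Fatalf("decode patch: %v", err)
	}
	for _, c := range patch.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}

Run against one of the records above, this would print the four conditions (MemoryPressure, DiskPressure, PIDPressure, Ready) with Ready blocked by KubeletNotReady.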
event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.733088 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.733107 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.733121 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.751485 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: E1126 07:01:04.751663 4909 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.753977 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
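After the fifth failed attempt the kubelet stops retrying and logs "update node status exceeds retry count". The bounded-retry shape visible here matches the upstream kubelet's nodeStatusUpdateRetry constant (5 at the time of writing); the following is a schematic sketch of that pattern, not the kubelet's actual code.

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the kubelet constant bounding how many
// consecutive status-patch attempts are made per sync before giving up.
const nodeStatusUpdateRetry = 5

func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patch(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Stand-in for the failing PATCH: every attempt hits the same expired
	// webhook certificate, so all five retries fail and the loop gives up,
	// exactly as in the records above.
	errExpired := errors.New("x509: certificate has expired or is not yet valid")
	if err := updateNodeStatus(func() error { return errExpired }); err != nil {
		fmt.Println(err)
	}
}

Because the error is deterministic (an expired certificate, not a transient network fault), retrying cannot succeed; the node status stays stale until the webhook certificate is rotated or the node clock is corrected.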
event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.754015 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.754030 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.754049 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.754061 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.783172 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/0.log" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.786174 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c" exitCode=1 Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.786215 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c"} Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.786850 4909 scope.go:117] "RemoveContainer" containerID="47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.806093 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.828376 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.856910 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.860341 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.860396 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc 
kubenswrapper[4909]: I1126 07:01:04.860408 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.860432 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.860447 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.880921 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.899279 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.910381 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.921163 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.930802 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.955943 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:03Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI1126 07:01:03.866889 6229 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 07:01:03.866917 6229 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 07:01:03.866941 6229 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 07:01:03.866953 6229 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 07:01:03.866978 6229 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:03.866993 6229 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:03.866654 6229 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 07:01:03.867023 6229 factory.go:656] Stopping watch factory\\\\nI1126 07:01:03.867060 6229 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:03.867081 6229 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:03.867098 6229 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 07:01:03.867112 6229 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 07:01:03.867126 6229 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 07:01:03.867140 6229 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:03.867158 6229 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.964554 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.964574 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.964582 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.964611 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.964620 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:04Z","lastTransitionTime":"2025-11-26T07:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.967550 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.979277 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:04 crc kubenswrapper[4909]: I1126 07:01:04.990498 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:04Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.002759 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.021941 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.035888 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.047113 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.058941 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.067116 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.067150 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.067160 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.067175 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.067185 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:05Z","lastTransitionTime":"2025-11-26T07:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.074463 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.086170 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.095431 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.109287 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.122663 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.136128 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.153643 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.167679 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.169819 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.170027 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.170184 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.170328 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.170458 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:05Z","lastTransitionTime":"2025-11-26T07:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.186011 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.201646 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.225970 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.254039 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:03Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI1126 07:01:03.866889 6229 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 07:01:03.866917 6229 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 07:01:03.866941 6229 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 07:01:03.866953 6229 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 07:01:03.866978 6229 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:03.866993 6229 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:03.866654 6229 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 07:01:03.867023 6229 factory.go:656] Stopping watch factory\\\\nI1126 07:01:03.867060 6229 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:03.867081 6229 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:03.867098 6229 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 07:01:03.867112 6229 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 07:01:03.867126 6229 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 07:01:03.867140 6229 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:03.867158 6229 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.273430 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.273478 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.273491 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.273514 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.273531 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:05Z","lastTransitionTime":"2025-11-26T07:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.375950 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.376707 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.376822 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.376918 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.377011 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:05Z","lastTransitionTime":"2025-11-26T07:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.479428 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.479470 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.479482 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.479499 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.479511 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:05Z","lastTransitionTime":"2025-11-26T07:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.582088 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.582118 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.582125 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.582138 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.582147 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:05Z","lastTransitionTime":"2025-11-26T07:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.684409 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.684481 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.684505 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.684535 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.684556 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:05Z","lastTransitionTime":"2025-11-26T07:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.786411 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.786446 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.786455 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.786468 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.786480 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:05Z","lastTransitionTime":"2025-11-26T07:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.790257 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/0.log" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.793039 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688"} Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.793304 4909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.813435 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.829047 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.838464 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.846747 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.858898 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.872527 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.886503 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.888571 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.888625 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.888637 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.888653 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.888665 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:05Z","lastTransitionTime":"2025-11-26T07:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.897550 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.907673 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.917675 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.934108 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:03Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI1126 07:01:03.866889 6229 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 07:01:03.866917 6229 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 07:01:03.866941 6229 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 07:01:03.866953 6229 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 07:01:03.866978 6229 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:03.866993 6229 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:03.866654 6229 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 07:01:03.867023 6229 factory.go:656] Stopping watch factory\\\\nI1126 07:01:03.867060 6229 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:03.867081 6229 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:03.867098 6229 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 07:01:03.867112 6229 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 07:01:03.867126 6229 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 07:01:03.867140 6229 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:03.867158 6229 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.944805 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.956907 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.973229 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:05Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.991331 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.991381 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.991393 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.991409 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:05 crc kubenswrapper[4909]: I1126 07:01:05.991421 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:05Z","lastTransitionTime":"2025-11-26T07:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.094141 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.094244 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.094270 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.094301 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.094321 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:06Z","lastTransitionTime":"2025-11-26T07:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.197231 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.197282 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.197293 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.197311 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.197324 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:06Z","lastTransitionTime":"2025-11-26T07:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.299513 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.299565 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.299579 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.299631 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.299649 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:06Z","lastTransitionTime":"2025-11-26T07:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.309879 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.402553 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.402604 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.402615 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.402631 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.402643 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:06Z","lastTransitionTime":"2025-11-26T07:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.466418 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb"] Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.467174 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.470539 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.470908 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.486573 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.499014 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.499059 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.499015 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:06 crc kubenswrapper[4909]: E1126 07:01:06.499283 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:06 crc kubenswrapper[4909]: E1126 07:01:06.499377 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:06 crc kubenswrapper[4909]: E1126 07:01:06.499516 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.503761 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.504404 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.504473 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.504497 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.504523 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.504547 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:06Z","lastTransitionTime":"2025-11-26T07:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.515907 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z"
Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.539539 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:03Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI1126 07:01:03.866889 6229 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 07:01:03.866917 6229 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 07:01:03.866941 6229 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 07:01:03.866953 6229 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 07:01:03.866978 6229 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:03.866993 6229 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:03.866654 6229 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 07:01:03.867023 6229 factory.go:656] Stopping watch factory\\\\nI1126 07:01:03.867060 6229 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:03.867081 6229 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:03.867098 6229 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 07:01:03.867112 6229 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 07:01:03.867126 6229 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 07:01:03.867140 6229 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:03.867158 6229 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z"
Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.553341 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z"
Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.566574 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkpqf\" (UniqueName: \"kubernetes.io/projected/87c66ecb-cdba-4731-9be5-55df0eb28303-kube-api-access-vkpqf\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb"
Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.566754 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87c66ecb-cdba-4731-9be5-55df0eb28303-env-overrides\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb"
Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.566832 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87c66ecb-cdba-4731-9be5-55df0eb28303-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb"
Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.566905 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87c66ecb-cdba-4731-9be5-55df0eb28303-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb"
Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.567520 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.580312 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z"
Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.595464 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.607556 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.607632 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.607647 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.607669 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.607683 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:06Z","lastTransitionTime":"2025-11-26T07:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.611189 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z"
Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.626546 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.637331 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.651132 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.662038 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.668126 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkpqf\" (UniqueName: \"kubernetes.io/projected/87c66ecb-cdba-4731-9be5-55df0eb28303-kube-api-access-vkpqf\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.668166 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87c66ecb-cdba-4731-9be5-55df0eb28303-env-overrides\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.668185 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/87c66ecb-cdba-4731-9be5-55df0eb28303-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.668209 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87c66ecb-cdba-4731-9be5-55df0eb28303-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.669084 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87c66ecb-cdba-4731-9be5-55df0eb28303-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.669165 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87c66ecb-cdba-4731-9be5-55df0eb28303-env-overrides\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.673547 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87c66ecb-cdba-4731-9be5-55df0eb28303-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.677384 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.689176 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkpqf\" (UniqueName: \"kubernetes.io/projected/87c66ecb-cdba-4731-9be5-55df0eb28303-kube-api-access-vkpqf\") pod \"ovnkube-control-plane-749d76644c-52cfb\" (UID: \"87c66ecb-cdba-4731-9be5-55df0eb28303\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.692743 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.709875 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.709919 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.709931 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.709948 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.709960 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:06Z","lastTransitionTime":"2025-11-26T07:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.789157 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.798872 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/1.log" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.799804 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/0.log" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.805139 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688" exitCode=1 Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.805205 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688"} Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.805268 4909 scope.go:117] "RemoveContainer" containerID="47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c" Nov 26 07:01:06 crc kubenswrapper[4909]: W1126 07:01:06.805942 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87c66ecb_cdba_4731_9be5_55df0eb28303.slice/crio-ad80c5028ccb6a706ed7fec25296638eb33e7d98e00711528eabadcb104dff66 WatchSource:0}: Error finding container ad80c5028ccb6a706ed7fec25296638eb33e7d98e00711528eabadcb104dff66: Status 404 returned error can't find the container with id ad80c5028ccb6a706ed7fec25296638eb33e7d98e00711528eabadcb104dff66 Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.806388 4909 scope.go:117] "RemoveContainer" containerID="5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688" Nov 26 07:01:06 crc kubenswrapper[4909]: E1126 07:01:06.806766 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.818882 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.818950 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.818974 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 
07:01:06.819003 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.819027 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:06Z","lastTransitionTime":"2025-11-26T07:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.828718 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.840715 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.853108 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.864409 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.876280 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.885190 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.895928 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.908579 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.918540 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.921146 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.921174 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.921185 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.921198 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.921208 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:06Z","lastTransitionTime":"2025-11-26T07:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.933938 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c31
46c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.954530 4909 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:03Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI1126 07:01:03.866889 6229 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 07:01:03.866917 6229 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 07:01:03.866941 6229 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 07:01:03.866953 6229 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 07:01:03.866978 6229 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:03.866993 6229 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:03.866654 6229 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 07:01:03.867023 6229 factory.go:656] Stopping watch factory\\\\nI1126 07:01:03.867060 6229 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:03.867081 6229 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:03.867098 6229 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 07:01:03.867112 6229 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 07:01:03.867126 6229 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 07:01:03.867140 6229 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:03.867158 6229 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI1126 07:01:05.865356 6373 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:05.865382 6373 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:05.865416 6373 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:05.865390 6373 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865486 6373 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 07:01:05.865538 6373 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 07:01:05.865548 6373 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865552 6373 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 07:01:05.865584 6373 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 07:01:05.865615 6373 factory.go:656] Stopping watch factory\\\\nI1126 07:01:05.865635 6373 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 07:01:05.865986 6373 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:05.866131 6373 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:05.866214 6373 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:05.866276 6373 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:05.866397 6373 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.967939 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.980058 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:06 crc kubenswrapper[4909]: I1126 07:01:06.993701 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.005252 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.024087 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.024155 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.024184 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.024219 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.024242 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:07Z","lastTransitionTime":"2025-11-26T07:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.126387 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.126449 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.126467 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.126495 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.126511 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:07Z","lastTransitionTime":"2025-11-26T07:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.237623 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.237689 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.237702 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.237729 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.237742 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:07Z","lastTransitionTime":"2025-11-26T07:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.340692 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.340744 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.340761 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.340782 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.340797 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:07Z","lastTransitionTime":"2025-11-26T07:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.443610 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.443657 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.443674 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.443695 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.443707 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:07Z","lastTransitionTime":"2025-11-26T07:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.545624 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.545726 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.545741 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.545768 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.545789 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:07Z","lastTransitionTime":"2025-11-26T07:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.647879 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.647921 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.647930 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.647943 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.647953 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:07Z","lastTransitionTime":"2025-11-26T07:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.750684 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.750753 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.750768 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.750790 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.750802 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:07Z","lastTransitionTime":"2025-11-26T07:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.810438 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" event={"ID":"87c66ecb-cdba-4731-9be5-55df0eb28303","Type":"ContainerStarted","Data":"7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.810515 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" event={"ID":"87c66ecb-cdba-4731-9be5-55df0eb28303","Type":"ContainerStarted","Data":"81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.810540 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" event={"ID":"87c66ecb-cdba-4731-9be5-55df0eb28303","Type":"ContainerStarted","Data":"ad80c5028ccb6a706ed7fec25296638eb33e7d98e00711528eabadcb104dff66"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.812331 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/1.log" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.816344 4909 scope.go:117] "RemoveContainer" containerID="5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688" Nov 26 07:01:07 crc kubenswrapper[4909]: E1126 07:01:07.816524 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.828881 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.849019 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.853017 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.853055 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.853062 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.853077 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.853087 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:07Z","lastTransitionTime":"2025-11-26T07:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.864799 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.879615 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.898890 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.911348 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.932520 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f
075ea2225023942ae1c74688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47782598affc1c6f3945eb4f209d0b48333e60a0f44bc2730d2856e5a953035c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:03Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI1126 07:01:03.866889 6229 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 07:01:03.866917 6229 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 07:01:03.866941 6229 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 07:01:03.866953 6229 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 07:01:03.866978 6229 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:03.866993 6229 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:03.866654 6229 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 07:01:03.867023 6229 factory.go:656] Stopping watch factory\\\\nI1126 07:01:03.867060 6229 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:03.867081 6229 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:03.867098 6229 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 07:01:03.867112 6229 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 07:01:03.867126 6229 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 07:01:03.867140 6229 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:03.867158 6229 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:05.865356 6373 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:05.865382 6373 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:05.865416 6373 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:05.865390 6373 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865486 6373 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 07:01:05.865538 6373 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 07:01:05.865548 6373 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865552 6373 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 07:01:05.865584 6373 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 07:01:05.865615 6373 factory.go:656] Stopping watch factory\\\\nI1126 07:01:05.865635 6373 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 07:01:05.865986 6373 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:05.866131 6373 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:05.866214 6373 ovnkube.go:599] 
Stopped ovnkube\\\\nI1126 07:01:05.866276 6373 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:05.866397 6373 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.947985 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.952023 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-8llwb"] Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.952432 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:07 crc kubenswrapper[4909]: E1126 07:01:07.952498 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.954768 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.954793 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.954801 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.954814 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.954824 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:07Z","lastTransitionTime":"2025-11-26T07:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.961257 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.971465 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.984075 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:07 crc kubenswrapper[4909]: I1126 07:01:07.995500 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:07Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.007658 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\
\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.016393 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.025737 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.041572 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of 
http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 
07:01:08.055374 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.057238 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.057279 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.057291 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.057308 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.057317 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:08Z","lastTransitionTime":"2025-11-26T07:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.064268 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.076752 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.087098 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mmf9\" (UniqueName: \"kubernetes.io/projected/6e91888f-077f-4be0-a258-568bde5c10bd-kube-api-access-2mmf9\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " 
pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.087148 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.088505 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8
506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.098033 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.110106 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.119843 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.159951 4909 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.160004 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.160017 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.160040 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.160055 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:08Z","lastTransitionTime":"2025-11-26T07:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.160419 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f
075ea2225023942ae1c74688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:05.865356 6373 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:05.865382 6373 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:05.865416 6373 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:05.865390 6373 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865486 6373 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 07:01:05.865538 6373 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 07:01:05.865548 6373 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865552 6373 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 07:01:05.865584 6373 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 07:01:05.865615 6373 factory.go:656] Stopping watch factory\\\\nI1126 07:01:05.865635 6373 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 07:01:05.865986 6373 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:05.866131 6373 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:05.866214 6373 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:05.866276 6373 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:05.866397 6373 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.185474 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.188049 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mmf9\" (UniqueName: \"kubernetes.io/projected/6e91888f-077f-4be0-a258-568bde5c10bd-kube-api-access-2mmf9\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.188476 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:08 crc kubenswrapper[4909]: E1126 07:01:08.188603 4909 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:08 crc kubenswrapper[4909]: E1126 07:01:08.188666 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs podName:6e91888f-077f-4be0-a258-568bde5c10bd nodeName:}" failed. No retries permitted until 2025-11-26 07:01:08.688649266 +0000 UTC m=+40.834860432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs") pod "network-metrics-daemon-8llwb" (UID: "6e91888f-077f-4be0-a258-568bde5c10bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.209163 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.209503 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mmf9\" (UniqueName: \"kubernetes.io/projected/6e91888f-077f-4be0-a258-568bde5c10bd-kube-api-access-2mmf9\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.219417 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.229744 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 
07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.240294 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.251625 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.259506 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.263551 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.263631 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.263646 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:08 crc 
kubenswrapper[4909]: I1126 07:01:08.263667 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.263680 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:08Z","lastTransitionTime":"2025-11-26T07:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.367008 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.367073 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.367084 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.367103 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.367118 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:08Z","lastTransitionTime":"2025-11-26T07:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.470537 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.470620 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.470636 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.470659 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.470678 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:08Z","lastTransitionTime":"2025-11-26T07:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.498140 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.498263 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:08 crc kubenswrapper[4909]: E1126 07:01:08.498287 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.498400 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:08 crc kubenswrapper[4909]: E1126 07:01:08.498454 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:08 crc kubenswrapper[4909]: E1126 07:01:08.498682 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.519644 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.540681 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.554054 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.573049 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.573122 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.573145 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.573170 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.573189 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:08Z","lastTransitionTime":"2025-11-26T07:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.574870 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c31
46c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.588095 4909 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.602421 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.614899 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.625438 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.640069 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.663890 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f
075ea2225023942ae1c74688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:05.865356 6373 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:05.865382 6373 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:05.865416 6373 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:05.865390 6373 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865486 6373 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 07:01:05.865538 6373 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 07:01:05.865548 6373 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865552 6373 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 07:01:05.865584 6373 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 07:01:05.865615 6373 factory.go:656] Stopping watch factory\\\\nI1126 07:01:05.865635 6373 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 07:01:05.865986 6373 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:05.866131 6373 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:05.866214 6373 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:05.866276 6373 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:05.866397 6373 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.675152 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.675195 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.675208 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.675225 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.675240 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:08Z","lastTransitionTime":"2025-11-26T07:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.678339 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.691905 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.693972 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:08 crc kubenswrapper[4909]: E1126 07:01:08.694170 4909 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:08 crc kubenswrapper[4909]: E1126 07:01:08.694249 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs podName:6e91888f-077f-4be0-a258-568bde5c10bd nodeName:}" failed. No retries permitted until 2025-11-26 07:01:09.6942265 +0000 UTC m=+41.840437666 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs") pod "network-metrics-daemon-8llwb" (UID: "6e91888f-077f-4be0-a258-568bde5c10bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.702473 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.713911 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.729235 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.739280 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.777472 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.777506 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.777514 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.777530 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.777540 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:08Z","lastTransitionTime":"2025-11-26T07:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.880822 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.880878 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.880894 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.880914 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.880982 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:08Z","lastTransitionTime":"2025-11-26T07:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.984083 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.984172 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.984191 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.984212 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:08 crc kubenswrapper[4909]: I1126 07:01:08.984227 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:08Z","lastTransitionTime":"2025-11-26T07:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.087146 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.087187 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.087196 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.087211 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.087221 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:09Z","lastTransitionTime":"2025-11-26T07:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.190680 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.190726 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.190736 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.190753 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.190763 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:09Z","lastTransitionTime":"2025-11-26T07:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.293963 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.294463 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.294573 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.294914 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.295010 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:09Z","lastTransitionTime":"2025-11-26T07:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.398477 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.398531 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.398548 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.398620 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.398638 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:09Z","lastTransitionTime":"2025-11-26T07:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.497993 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:09 crc kubenswrapper[4909]: E1126 07:01:09.498143 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.502117 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.502152 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.502165 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.502181 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.502192 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:09Z","lastTransitionTime":"2025-11-26T07:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.604801 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.604882 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.604897 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.604923 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.604951 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:09Z","lastTransitionTime":"2025-11-26T07:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.704096 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:09 crc kubenswrapper[4909]: E1126 07:01:09.704281 4909 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:09 crc kubenswrapper[4909]: E1126 07:01:09.704379 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs podName:6e91888f-077f-4be0-a258-568bde5c10bd nodeName:}" failed. No retries permitted until 2025-11-26 07:01:11.704357982 +0000 UTC m=+43.850569158 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs") pod "network-metrics-daemon-8llwb" (UID: "6e91888f-077f-4be0-a258-568bde5c10bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.707924 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.707982 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.708005 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.708027 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.708041 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:09Z","lastTransitionTime":"2025-11-26T07:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.810855 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.810892 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.810903 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.810918 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.810930 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:09Z","lastTransitionTime":"2025-11-26T07:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.913540 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.913571 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.913579 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.913621 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:09 crc kubenswrapper[4909]: I1126 07:01:09.913638 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:09Z","lastTransitionTime":"2025-11-26T07:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.015753 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.015798 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.015808 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.015823 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.015834 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:10Z","lastTransitionTime":"2025-11-26T07:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.118203 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.118255 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.118266 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.118282 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.118293 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:10Z","lastTransitionTime":"2025-11-26T07:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.221008 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.221055 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.221069 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.221090 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.221106 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:10Z","lastTransitionTime":"2025-11-26T07:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.323221 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.323275 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.323290 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.323312 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.323326 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:10Z","lastTransitionTime":"2025-11-26T07:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.426718 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.426777 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.426791 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.426811 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.426823 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:10Z","lastTransitionTime":"2025-11-26T07:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.498853 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.499007 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:10 crc kubenswrapper[4909]: E1126 07:01:10.499055 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.499158 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:10 crc kubenswrapper[4909]: E1126 07:01:10.499217 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:10 crc kubenswrapper[4909]: E1126 07:01:10.499309 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.530108 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.530149 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.530160 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.530179 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.530191 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:10Z","lastTransitionTime":"2025-11-26T07:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.638990 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.639039 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.639051 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.639069 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.639081 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:10Z","lastTransitionTime":"2025-11-26T07:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.741542 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.741609 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.741623 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.741643 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.741657 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:10Z","lastTransitionTime":"2025-11-26T07:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.844339 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.844401 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.844418 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.844440 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.844456 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:10Z","lastTransitionTime":"2025-11-26T07:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.947577 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.947662 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.947674 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.947692 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:10 crc kubenswrapper[4909]: I1126 07:01:10.947705 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:10Z","lastTransitionTime":"2025-11-26T07:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.051204 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.051294 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.051318 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.051347 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.051368 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:11Z","lastTransitionTime":"2025-11-26T07:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.154381 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.154437 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.154449 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.154469 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.154483 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:11Z","lastTransitionTime":"2025-11-26T07:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.257259 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.257313 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.257329 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.257354 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.257371 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:11Z","lastTransitionTime":"2025-11-26T07:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.360669 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.360730 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.360744 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.360768 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.360782 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:11Z","lastTransitionTime":"2025-11-26T07:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.463558 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.463648 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.463662 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.463678 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.463688 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:11Z","lastTransitionTime":"2025-11-26T07:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.498659 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:11 crc kubenswrapper[4909]: E1126 07:01:11.498863 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.567376 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.567422 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.567440 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.567460 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.567474 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:11Z","lastTransitionTime":"2025-11-26T07:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.671671 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.671742 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.671759 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.671784 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.671826 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:11Z","lastTransitionTime":"2025-11-26T07:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.727479 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:11 crc kubenswrapper[4909]: E1126 07:01:11.727674 4909 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:11 crc kubenswrapper[4909]: E1126 07:01:11.727748 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs podName:6e91888f-077f-4be0-a258-568bde5c10bd nodeName:}" failed. No retries permitted until 2025-11-26 07:01:15.72772549 +0000 UTC m=+47.873936656 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs") pod "network-metrics-daemon-8llwb" (UID: "6e91888f-077f-4be0-a258-568bde5c10bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.775665 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.775718 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.775730 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.775750 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.775760 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:11Z","lastTransitionTime":"2025-11-26T07:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.878021 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.878068 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.878085 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.878108 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.878123 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:11Z","lastTransitionTime":"2025-11-26T07:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.981473 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.981542 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.981553 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.981574 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:11 crc kubenswrapper[4909]: I1126 07:01:11.981587 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:11Z","lastTransitionTime":"2025-11-26T07:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.084560 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.084628 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.084643 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.084674 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.084688 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:12Z","lastTransitionTime":"2025-11-26T07:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.187090 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.187150 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.187163 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.187184 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.187199 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:12Z","lastTransitionTime":"2025-11-26T07:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.290344 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.290386 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.290399 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.290417 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.290429 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:12Z","lastTransitionTime":"2025-11-26T07:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.392931 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.392993 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.393006 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.393025 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.393037 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:12Z","lastTransitionTime":"2025-11-26T07:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.496219 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.496265 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.496281 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.496300 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.496314 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:12Z","lastTransitionTime":"2025-11-26T07:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.498842 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.498939 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.498851 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:01:12 crc kubenswrapper[4909]: E1126 07:01:12.499052 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 07:01:12 crc kubenswrapper[4909]: E1126 07:01:12.499173 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 07:01:12 crc kubenswrapper[4909]: E1126 07:01:12.499267 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.600276 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.600364 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.600390 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.600423 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.600447 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:12Z","lastTransitionTime":"2025-11-26T07:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.703198 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.703279 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.703300 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.703325 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.703343 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:12Z","lastTransitionTime":"2025-11-26T07:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.811460 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.811520 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.811535 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.811558 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.811574 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:12Z","lastTransitionTime":"2025-11-26T07:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.914327 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.914369 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.914380 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.914395 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:12 crc kubenswrapper[4909]: I1126 07:01:12.914405 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:12Z","lastTransitionTime":"2025-11-26T07:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.019078 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.019139 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.019160 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.019188 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.019208 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:13Z","lastTransitionTime":"2025-11-26T07:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.122198 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.122232 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.122244 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.122260 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.122272 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:13Z","lastTransitionTime":"2025-11-26T07:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.224552 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.224662 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.224679 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.224692 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.224701 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:13Z","lastTransitionTime":"2025-11-26T07:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.327564 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.327696 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.327724 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.327760 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.327785 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:13Z","lastTransitionTime":"2025-11-26T07:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.430747 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.430825 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.430846 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.430870 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.430885 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:13Z","lastTransitionTime":"2025-11-26T07:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.498177 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb"
Nov 26 07:01:13 crc kubenswrapper[4909]: E1126 07:01:13.498364 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.533780 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.533846 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.533860 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.533883 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.533902 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:13Z","lastTransitionTime":"2025-11-26T07:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.637243 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.637293 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.637306 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.637326 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.637342 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:13Z","lastTransitionTime":"2025-11-26T07:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.739725 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.739759 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.739768 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.739784 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.739797 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:13Z","lastTransitionTime":"2025-11-26T07:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.843435 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.843506 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.843520 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.843541 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.843552 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:13Z","lastTransitionTime":"2025-11-26T07:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.946797 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.946937 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.946961 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.946990 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:13 crc kubenswrapper[4909]: I1126 07:01:13.947013 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:13Z","lastTransitionTime":"2025-11-26T07:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.049789 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.049855 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.049879 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.049909 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.049934 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.153189 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.153259 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.153283 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.153312 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.153336 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.255264 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.255369 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.255387 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.255419 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.255438 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.358245 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.358309 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.358332 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.358361 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.358383 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.461227 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.461320 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.461347 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.461379 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.461403 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.498788 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.498843 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.498964 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:01:14 crc kubenswrapper[4909]: E1126 07:01:14.499165 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 07:01:14 crc kubenswrapper[4909]: E1126 07:01:14.499332 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 07:01:14 crc kubenswrapper[4909]: E1126 07:01:14.499495 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.565739 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.565801 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.565820 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.565843 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.565861 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.668723 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.668788 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.668811 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.668842 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.668863 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.771931 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.771998 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.772014 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.772038 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.772055 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.838832 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.838890 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.838906 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.838926 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.838942 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:14 crc kubenswrapper[4909]: E1126 07:01:14.861920 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:14Z is after 
2025-08-24T17:21:41Z" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.866616 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.866653 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.866663 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.866678 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.866688 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:14 crc kubenswrapper[4909]: E1126 07:01:14.883562 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:14Z is after 
2025-08-24T17:21:41Z" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.888615 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.888857 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.888921 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.889009 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.889077 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:14 crc kubenswrapper[4909]: E1126 07:01:14.906489 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:14Z is after 
2025-08-24T17:21:41Z" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.911447 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.911526 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.911540 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.911558 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.911619 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:14 crc kubenswrapper[4909]: E1126 07:01:14.929294 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:14Z is after 
2025-08-24T17:21:41Z" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.934380 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.934416 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.934426 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.934440 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.934450 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:14 crc kubenswrapper[4909]: E1126 07:01:14.950958 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:14Z is after 
2025-08-24T17:21:41Z" Nov 26 07:01:14 crc kubenswrapper[4909]: E1126 07:01:14.951136 4909 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.953748 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.953787 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.953800 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.953819 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:14 crc kubenswrapper[4909]: I1126 07:01:14.953833 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:14Z","lastTransitionTime":"2025-11-26T07:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.056569 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.056656 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.056675 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.056694 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.056710 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:15Z","lastTransitionTime":"2025-11-26T07:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.159627 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.159675 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.159687 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.159707 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.159719 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:15Z","lastTransitionTime":"2025-11-26T07:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.262653 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.262707 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.262717 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.262730 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.262739 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:15Z","lastTransitionTime":"2025-11-26T07:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.365334 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.365394 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.365413 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.365436 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.365454 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:15Z","lastTransitionTime":"2025-11-26T07:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.468129 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.468177 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.468193 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.468215 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.468230 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:15Z","lastTransitionTime":"2025-11-26T07:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.497908 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:15 crc kubenswrapper[4909]: E1126 07:01:15.498089 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.572055 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.572126 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.572141 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.572163 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.572178 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:15Z","lastTransitionTime":"2025-11-26T07:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.674665 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.674753 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.674786 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.674813 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.674833 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:15Z","lastTransitionTime":"2025-11-26T07:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.774503 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:15 crc kubenswrapper[4909]: E1126 07:01:15.774805 4909 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:15 crc kubenswrapper[4909]: E1126 07:01:15.774971 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs podName:6e91888f-077f-4be0-a258-568bde5c10bd nodeName:}" failed. No retries permitted until 2025-11-26 07:01:23.774931448 +0000 UTC m=+55.921142774 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs") pod "network-metrics-daemon-8llwb" (UID: "6e91888f-077f-4be0-a258-568bde5c10bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.777560 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.777670 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.777682 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.777738 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.777759 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:15Z","lastTransitionTime":"2025-11-26T07:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.880960 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.881006 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.881014 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.881029 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.881040 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:15Z","lastTransitionTime":"2025-11-26T07:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
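
The "durationBeforeRetry 8s" in the mount failure above reflects the kubelet's per-volume exponential backoff: each failed MountVolume.SetUp roughly doubles the wait before the next attempt, up to a cap. A sketch of that doubling pattern; the 500ms initial delay and 2-minute cap are illustrative assumptions, not values read from this kubelet:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed starting delay and cap; the real values live in the
        // kubelet's nestedpendingoperations backoff, not in this log.
        delay := 500 * time.Millisecond
        maxDelay := 2 * time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Under these assumptions the fifth failure waits 8s, consistent with the "No retries permitted until 2025-11-26 07:01:23" window logged above.
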
Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.983939 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.984018 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.984035 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.984061 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:15 crc kubenswrapper[4909]: I1126 07:01:15.984081 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:15Z","lastTransitionTime":"2025-11-26T07:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.087455 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.087520 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.087537 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.087560 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.087577 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:16Z","lastTransitionTime":"2025-11-26T07:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.190400 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.190448 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.190459 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.190477 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.190492 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:16Z","lastTransitionTime":"2025-11-26T07:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.293659 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.293709 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.293727 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.293751 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.293769 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:16Z","lastTransitionTime":"2025-11-26T07:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.395932 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.395977 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.395988 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.396005 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.396020 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:16Z","lastTransitionTime":"2025-11-26T07:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.498057 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.498118 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.498067 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:16 crc kubenswrapper[4909]: E1126 07:01:16.498252 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.498314 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.498349 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.498372 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.498402 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.498425 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:16Z","lastTransitionTime":"2025-11-26T07:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:16 crc kubenswrapper[4909]: E1126 07:01:16.498481 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:16 crc kubenswrapper[4909]: E1126 07:01:16.498393 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.601994 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.602061 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.602088 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.602116 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.602138 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:16Z","lastTransitionTime":"2025-11-26T07:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.704873 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.704917 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.704933 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.704951 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.704964 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:16Z","lastTransitionTime":"2025-11-26T07:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.807866 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.807902 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.807916 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.807934 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.807947 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:16Z","lastTransitionTime":"2025-11-26T07:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.923285 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.923365 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.923391 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.923432 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:16 crc kubenswrapper[4909]: I1126 07:01:16.923530 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:16Z","lastTransitionTime":"2025-11-26T07:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.026411 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.026445 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.026454 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.026468 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.026477 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:17Z","lastTransitionTime":"2025-11-26T07:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.128962 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.129012 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.129025 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.129044 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.129058 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:17Z","lastTransitionTime":"2025-11-26T07:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.232032 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.232091 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.232109 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.232132 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.232149 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:17Z","lastTransitionTime":"2025-11-26T07:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.335400 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.335447 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.335459 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.335474 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.335486 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:17Z","lastTransitionTime":"2025-11-26T07:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.438676 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.438775 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.438833 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.438930 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.439040 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:17Z","lastTransitionTime":"2025-11-26T07:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.498354 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:17 crc kubenswrapper[4909]: E1126 07:01:17.498689 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.542790 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.542847 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.542871 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.542903 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.542928 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:17Z","lastTransitionTime":"2025-11-26T07:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.646061 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.646115 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.646135 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.646158 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.646175 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:17Z","lastTransitionTime":"2025-11-26T07:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.749217 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.749283 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.749301 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.749326 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.749343 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:17Z","lastTransitionTime":"2025-11-26T07:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.851861 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.851903 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.851913 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.851927 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.851940 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:17Z","lastTransitionTime":"2025-11-26T07:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.954484 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.954578 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.954629 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.954666 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:17 crc kubenswrapper[4909]: I1126 07:01:17.954685 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:17Z","lastTransitionTime":"2025-11-26T07:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.057948 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.058001 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.058016 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.058037 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.058052 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:18Z","lastTransitionTime":"2025-11-26T07:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.161414 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.161487 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.161511 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.161542 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.161573 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:18Z","lastTransitionTime":"2025-11-26T07:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.264318 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.264372 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.264385 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.264400 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.264414 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:18Z","lastTransitionTime":"2025-11-26T07:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.367344 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.367432 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.367457 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.367486 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.367509 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:18Z","lastTransitionTime":"2025-11-26T07:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.470786 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.470855 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.470874 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.470900 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.470923 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:18Z","lastTransitionTime":"2025-11-26T07:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.498226 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.498279 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:18 crc kubenswrapper[4909]: E1126 07:01:18.498451 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:18 crc kubenswrapper[4909]: E1126 07:01:18.498878 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.498998 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:18 crc kubenswrapper[4909]: E1126 07:01:18.499250 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.517289 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-
cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.530638 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.541582 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.562752 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.574161 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.574194 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:18 crc 
kubenswrapper[4909]: I1126 07:01:18.574202 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.574214 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.574224 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:18Z","lastTransitionTime":"2025-11-26T07:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.574418 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.600912 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f
075ea2225023942ae1c74688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:05.865356 6373 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:05.865382 6373 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:05.865416 6373 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:05.865390 6373 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865486 6373 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 07:01:05.865538 6373 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 07:01:05.865548 6373 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865552 6373 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 07:01:05.865584 6373 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 07:01:05.865615 6373 factory.go:656] Stopping watch factory\\\\nI1126 07:01:05.865635 6373 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 07:01:05.865986 6373 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:05.866131 6373 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:05.866214 6373 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:05.866276 6373 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:05.866397 6373 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.615450 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.631455 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.646257 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.660848 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.673238 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.677086 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.677159 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.677178 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.677204 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.677219 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:18Z","lastTransitionTime":"2025-11-26T07:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.692427 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.706770 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.722008 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.735484 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.748180 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.779433 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.779466 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.779477 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.779491 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.779500 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:18Z","lastTransitionTime":"2025-11-26T07:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.882123 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.882671 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.882684 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.882717 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.882732 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:18Z","lastTransitionTime":"2025-11-26T07:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.916245 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.934193 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.941379 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.966011 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.983868 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:18Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.985675 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.985747 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.985767 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.985788 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:18 crc kubenswrapper[4909]: I1126 07:01:18.985805 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:18Z","lastTransitionTime":"2025-11-26T07:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.006372 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.029691 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.043077 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.064458 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.086496 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:05.865356 6373 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:05.865382 6373 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:05.865416 6373 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:05.865390 6373 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865486 6373 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 07:01:05.865538 6373 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 07:01:05.865548 6373 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865552 6373 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 07:01:05.865584 6373 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 07:01:05.865615 6373 factory.go:656] Stopping watch factory\\\\nI1126 07:01:05.865635 6373 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 07:01:05.865986 6373 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:05.866131 6373 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:05.866214 6373 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:05.866276 6373 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:05.866397 6373 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.088396 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.088459 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.088480 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.088501 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.088515 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:19Z","lastTransitionTime":"2025-11-26T07:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.095785 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.107349 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.129119 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.144815 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.158385 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.173214 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.189079 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.190823 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.190870 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.190889 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.190912 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.190928 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:19Z","lastTransitionTime":"2025-11-26T07:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.202555 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.294507 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.294563 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.294582 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.294681 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.294702 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:19Z","lastTransitionTime":"2025-11-26T07:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.398407 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.398485 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.398508 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.398536 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.398559 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:19Z","lastTransitionTime":"2025-11-26T07:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.498088 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:19 crc kubenswrapper[4909]: E1126 07:01:19.499039 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.500118 4909 scope.go:117] "RemoveContainer" containerID="5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.501302 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.501342 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.501362 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.501386 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.501406 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:19Z","lastTransitionTime":"2025-11-26T07:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.607499 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.607535 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.607544 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.607558 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.607569 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:19Z","lastTransitionTime":"2025-11-26T07:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.710325 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.710380 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.710391 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.710411 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.710424 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:19Z","lastTransitionTime":"2025-11-26T07:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.813370 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.813402 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.813412 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.813427 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.813437 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:19Z","lastTransitionTime":"2025-11-26T07:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.915906 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.915957 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.915966 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.915986 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.915997 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:19Z","lastTransitionTime":"2025-11-26T07:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.940818 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/1.log" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.943700 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5"} Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.944535 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.965685 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fc
d841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.981153 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:19 crc kubenswrapper[4909]: I1126 07:01:19.992672 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:19Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.009511 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.017996 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.018046 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:20 crc 
kubenswrapper[4909]: I1126 07:01:20.018058 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.018077 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.018091 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:20Z","lastTransitionTime":"2025-11-26T07:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.022841 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.035543 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.048973 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.064649 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.082842 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.104373 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214
df0b7887a800c1639d57cbd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:05.865356 6373 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:05.865382 6373 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:05.865416 6373 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:05.865390 6373 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865486 6373 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 07:01:05.865538 6373 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 07:01:05.865548 6373 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865552 6373 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 07:01:05.865584 6373 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 07:01:05.865615 6373 factory.go:656] Stopping watch factory\\\\nI1126 07:01:05.865635 6373 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 07:01:05.865986 6373 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:05.866131 6373 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:05.866214 6373 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:05.866276 6373 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:05.866397 6373 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.115764 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.119915 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.119954 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.119965 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.119979 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.119988 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:20Z","lastTransitionTime":"2025-11-26T07:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.126992 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.136302 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.145087 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.155265 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.166966 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\
\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.177977 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.222479 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.222533 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.222545 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.222563 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.222577 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:20Z","lastTransitionTime":"2025-11-26T07:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.325779 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.325867 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.325889 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.325917 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.325935 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:20Z","lastTransitionTime":"2025-11-26T07:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.362666 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.362802 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.362883 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.362920 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.362966 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:01:52.362925147 +0000 UTC m=+84.509136353 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.363037 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363096 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363127 4909 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363149 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363161 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363198 4909 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363224 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:52.363198544 +0000 UTC m=+84.509409740 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363274 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:52.363251935 +0000 UTC m=+84.509463141 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363207 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363311 4909 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363387 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:52.363359318 +0000 UTC m=+84.509570564 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363174 4909 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.363497 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 07:01:52.363473691 +0000 UTC m=+84.509684987 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.428338 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.428389 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.428405 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.428425 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.428439 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:20Z","lastTransitionTime":"2025-11-26T07:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.497863 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.497988 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.498133 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.498184 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.498304 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.498436 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.531399 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.531442 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.531452 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.531466 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.531477 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:20Z","lastTransitionTime":"2025-11-26T07:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.633743 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.633809 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.633830 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.633859 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.633899 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:20Z","lastTransitionTime":"2025-11-26T07:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.736019 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.736111 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.736129 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.736154 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.736170 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:20Z","lastTransitionTime":"2025-11-26T07:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.838829 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.838882 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.838901 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.838925 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.838942 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:20Z","lastTransitionTime":"2025-11-26T07:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.942664 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.942717 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.942730 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.942750 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.942764 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:20Z","lastTransitionTime":"2025-11-26T07:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.948463 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/2.log"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.949242 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/1.log"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.952097 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5" exitCode=1
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.952136 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5"}
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.952170 4909 scope.go:117] "RemoveContainer" containerID="5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.952945 4909 scope.go:117] "RemoveContainer" containerID="0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5"
Nov 26 07:01:20 crc kubenswrapper[4909]: E1126 07:01:20.953119 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b"
Nov 26 07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.973840 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 
07:01:20 crc kubenswrapper[4909]: I1126 07:01:20.986894 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:20Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.002641 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.023251 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.046110 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.046172 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.046197 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.046222 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.046239 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:21Z","lastTransitionTime":"2025-11-26T07:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.047945 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.061726 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.077835 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.091516 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.109545 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.126900 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.149435 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.149494 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.149511 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.149534 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.149551 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:21Z","lastTransitionTime":"2025-11-26T07:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.153212 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.167941 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.185626 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.201696 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.230828 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5413bee1a7d93580f894a312f51654df3638d83f075ea2225023942ae1c74688\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 07:01:05.865356 6373 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 07:01:05.865382 6373 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 07:01:05.865416 6373 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 07:01:05.865390 6373 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865486 6373 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 07:01:05.865538 6373 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 07:01:05.865548 6373 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1126 07:01:05.865552 6373 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 07:01:05.865584 6373 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 07:01:05.865615 6373 factory.go:656] Stopping watch factory\\\\nI1126 07:01:05.865635 6373 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 07:01:05.865986 6373 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:05.866131 6373 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:05.866214 6373 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:05.866276 6373 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:05.866397 6373 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:20Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290622 6573 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290917 6573 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:20.291035 6573 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:20.291090 6573 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:20.291128 6573 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:20.291216 6573 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.246111 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.252694 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.252733 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.252745 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.252762 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.252775 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:21Z","lastTransitionTime":"2025-11-26T07:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.262489 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.355763 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.355834 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.355857 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.355899 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.355922 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:21Z","lastTransitionTime":"2025-11-26T07:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.459501 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.459569 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.459672 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.459707 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.459725 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:21Z","lastTransitionTime":"2025-11-26T07:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.498114 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:21 crc kubenswrapper[4909]: E1126 07:01:21.498250 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.563416 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.563502 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.563521 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.563548 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.563567 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:21Z","lastTransitionTime":"2025-11-26T07:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.666044 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.666111 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.666135 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.666167 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.666195 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:21Z","lastTransitionTime":"2025-11-26T07:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.768859 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.768922 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.768940 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.768962 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.768979 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:21Z","lastTransitionTime":"2025-11-26T07:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.871686 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.871749 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.871771 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.871795 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.871809 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:21Z","lastTransitionTime":"2025-11-26T07:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.956565 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/2.log" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.959696 4909 scope.go:117] "RemoveContainer" containerID="0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5" Nov 26 07:01:21 crc kubenswrapper[4909]: E1126 07:01:21.959955 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.973767 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.973804 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.973814 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.973830 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.973842 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:21Z","lastTransitionTime":"2025-11-26T07:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.974530 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.986260 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:21 crc kubenswrapper[4909]: I1126 07:01:21.997134 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:21Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.017736 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:20Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290622 6573 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290917 6573 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:20.291035 6573 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:20.291090 6573 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:20.291128 6573 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:20.291216 6573 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.061830 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.076150 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.076182 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.076192 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.076207 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.076218 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:22Z","lastTransitionTime":"2025-11-26T07:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.087413 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.103582 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.117184 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.130751 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.141786 4909 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.156224 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.167580 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.178897 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.178935 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.178947 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:22 crc 
kubenswrapper[4909]: I1126 07:01:22.178964 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.178976 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:22Z","lastTransitionTime":"2025-11-26T07:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.178952 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca
001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.191050 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.203215 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.218040 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z"
Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.238700 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:22Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.282485 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.282558 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.282573 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.282613 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.282628 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:22Z","lastTransitionTime":"2025-11-26T07:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.386102 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.386161 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.386176 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.386195 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.386209 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:22Z","lastTransitionTime":"2025-11-26T07:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.489487 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.489561 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.489577 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.489628 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.489643 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:22Z","lastTransitionTime":"2025-11-26T07:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.498059 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.498139 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:22 crc kubenswrapper[4909]: E1126 07:01:22.498170 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.498184 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:22 crc kubenswrapper[4909]: E1126 07:01:22.498284 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:22 crc kubenswrapper[4909]: E1126 07:01:22.498512 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.592239 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.592310 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.592322 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.592340 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.592351 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:22Z","lastTransitionTime":"2025-11-26T07:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.695938 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.696002 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.696020 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.696044 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.696061 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:22Z","lastTransitionTime":"2025-11-26T07:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.799284 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.799361 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.799375 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.799397 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.799411 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:22Z","lastTransitionTime":"2025-11-26T07:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.902370 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.902422 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.902434 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.902453 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:22 crc kubenswrapper[4909]: I1126 07:01:22.902467 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:22Z","lastTransitionTime":"2025-11-26T07:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.005447 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.005524 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.005534 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.005554 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.005573 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:23Z","lastTransitionTime":"2025-11-26T07:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.109578 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.109736 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.109762 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.109798 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.109818 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:23Z","lastTransitionTime":"2025-11-26T07:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.212667 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.212737 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.212755 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.212778 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.212799 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:23Z","lastTransitionTime":"2025-11-26T07:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.319930 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.319983 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.319992 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.320008 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.320022 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:23Z","lastTransitionTime":"2025-11-26T07:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.423221 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.423287 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.423304 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.423329 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.423348 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:23Z","lastTransitionTime":"2025-11-26T07:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.498919 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:23 crc kubenswrapper[4909]: E1126 07:01:23.499159 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.526408 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.526470 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.526491 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.526514 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.526532 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:23Z","lastTransitionTime":"2025-11-26T07:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.629384 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.629454 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.629472 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.629494 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.629581 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:23Z","lastTransitionTime":"2025-11-26T07:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.732486 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.732557 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.732576 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.732643 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.732663 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:23Z","lastTransitionTime":"2025-11-26T07:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:23 crc kubenswrapper[4909]: E1126 07:01:23.801157 4909 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:23 crc kubenswrapper[4909]: E1126 07:01:23.801293 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs podName:6e91888f-077f-4be0-a258-568bde5c10bd nodeName:}" failed. No retries permitted until 2025-11-26 07:01:39.801262016 +0000 UTC m=+71.947473222 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs") pod "network-metrics-daemon-8llwb" (UID: "6e91888f-077f-4be0-a258-568bde5c10bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.800959 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.835633 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.835711 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.835734 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.835769 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.835792 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:23Z","lastTransitionTime":"2025-11-26T07:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.939051 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.939118 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.939135 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.939156 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:23 crc kubenswrapper[4909]: I1126 07:01:23.939169 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:23Z","lastTransitionTime":"2025-11-26T07:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.041533 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.041745 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.041776 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.041813 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.041842 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:24Z","lastTransitionTime":"2025-11-26T07:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.144763 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.144793 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.144802 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.144816 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.144827 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:24Z","lastTransitionTime":"2025-11-26T07:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.247849 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.247946 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.247969 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.247999 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.248020 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:24Z","lastTransitionTime":"2025-11-26T07:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.350560 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.350668 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.350691 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.350718 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.350738 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:24Z","lastTransitionTime":"2025-11-26T07:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.454391 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.454448 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.454462 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.454483 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.454498 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:24Z","lastTransitionTime":"2025-11-26T07:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.498707 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.498750 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:01:24 crc kubenswrapper[4909]: E1126 07:01:24.498840 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.498953 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:01:24 crc kubenswrapper[4909]: E1126 07:01:24.499097 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 07:01:24 crc kubenswrapper[4909]: E1126 07:01:24.499364 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.556931 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.556986 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.557003 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.557027 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.557044 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:24Z","lastTransitionTime":"2025-11-26T07:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.660707 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.660785 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.660808 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.660835 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.660854 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:24Z","lastTransitionTime":"2025-11-26T07:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.763157 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.763200 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.763212 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.763228 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.763239 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:24Z","lastTransitionTime":"2025-11-26T07:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.867858 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.867909 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.867921 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.867940 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.867998 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:24Z","lastTransitionTime":"2025-11-26T07:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.969780 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.969841 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.969849 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.969863 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:24 crc kubenswrapper[4909]: I1126 07:01:24.969873 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:24Z","lastTransitionTime":"2025-11-26T07:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.073297 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.073374 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.073397 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.073429 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.073453 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.175974 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.176035 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.176049 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.176071 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.176084 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.202372 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.202432 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.202451 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.202479 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.202499 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:25 crc kubenswrapper[4909]: E1126 07:01:25.223934 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:25Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.229164 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.229230 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.229245 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.229267 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.229280 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:25 crc kubenswrapper[4909]: E1126 07:01:25.256684 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:25Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.262386 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.262429 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
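
Every patch attempt above dies at the node.network-node-identity.openshift.io admission webhook, not in the kubelet itself: the webhook's serving certificate expired on 2025-08-24, while the node clock reads 2025-11-26. The x509 error text comes from Go's standard certificate verifier, which rejects a chain when the current time falls outside the certificate's [NotBefore, NotAfter] window. Below is a minimal sketch of that validity check, handy for confirming the expiry offline; the PEM path is hypothetical, so point it at the serving certificate of the webhook being debugged.

// certcheck.go: a minimal sketch of the validity-window test behind
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; use the webhook's actual serving certificate.
	raw, err := os.ReadFile("webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	// The same window test the verifier applies during the TLS handshake.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate has expired or is not yet valid")
	}
}
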
event="NodeHasNoDiskPressure" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.262441 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.262461 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.262474 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:25 crc kubenswrapper[4909]: E1126 07:01:25.281846 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:25Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.287302 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.287379 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
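
The elided err payload (shown in full in the first attempt above) is a JSON strategic-merge patch, embedded in the journal with one level of \" escaping per nesting level, which is why the raw entries show runs of triple backslashes. Stripping an escape level and re-indenting makes the $setElementOrder/conditions, allocatable, capacity, conditions, and images sections readable. A small sketch, assuming a single escaping level; the payload constant here is a tiny stand-in for the real multi-kilobyte patch.

// patchpretty.go: a sketch for making the escaped status patch readable.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	// Stand-in fragment; paste the real escaped payload from the journal.
	escaped := `{\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"False\",\"reason\":\"KubeletNotReady\"}]}}`
	// Strip one escape level: \" back to ". For deeper nesting, repeat
	// until the text parses as JSON.
	plain := bytes.ReplaceAll([]byte(escaped), []byte(`\"`), []byte(`"`))
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, plain, "", "  "); err != nil {
		log.Fatal(err)
	}
	fmt.Println(pretty.String())
}
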
event="NodeHasNoDiskPressure" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.287406 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.287435 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.287457 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:25 crc kubenswrapper[4909]: E1126 07:01:25.309483 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:25Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.315452 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.315536 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
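
The setters.go entries show the condition the kubelet keeps trying to report: Ready=False with reason KubeletNotReady, because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. Once extracted from the log line, the condition is ordinary JSON. The sketch below decodes the exact object logged above and prints why the node is unready; only the struct and the printing are new, the literal is copied from the log.

// readycheck.go: decodes the condition object from the setters.go entries.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Copied verbatim from the "Node became not ready" log entry.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	if c.Type == "Ready" && c.Status != "True" {
		fmt.Printf("node not ready since %s: %s: %s\n", c.LastTransitionTime, c.Reason, c.Message)
	}
}
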
event="NodeHasNoDiskPressure" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.315556 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.315580 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.315658 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:25 crc kubenswrapper[4909]: E1126 07:01:25.337111 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:25Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:25 crc kubenswrapper[4909]: E1126 07:01:25.337263 4909 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.339685 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
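
This attempt is the last of the round: the kubelet gives up with "update node status exceeds retry count" and will start a fresh round at the next sync interval. In upstream kubelet the bound is the nodeStatusUpdateRetry constant, historically 5, which matches the five failed patch attempts visible in this round; the sketch below reproduces the log's shape under that assumption rather than quoting the real implementation.

// retryloop.go: a minimal sketch of the bounded retry that yields
// "Error updating node status, will retry" and then
// "update node status exceeds retry count".
package main

import (
	"errors"
	"fmt"
)

// Assumed bound; mirrors the five E-level attempts seen in the journal.
const nodeStatusUpdateRetry = 5

// tryUpdateNodeStatus stands in for the real PATCH against the API
// server; here it always fails the way the expired webhook cert does.
func tryUpdateNodeStatus() error {
	return errors.New("failed calling webhook: tls: failed to verify certificate: x509: certificate has expired or is not yet valid")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println(err)
	}
}
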
event="NodeHasSufficientMemory" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.339749 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.339768 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.339795 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.339814 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.443762 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.443821 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.443835 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.443859 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.443875 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.498718 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:25 crc kubenswrapper[4909]: E1126 07:01:25.498921 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.547427 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.547499 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.547523 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.547552 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.547576 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.649728 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.649769 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.649810 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.649824 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.649835 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.753351 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.753418 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.753434 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.753456 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.753468 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.856116 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.856171 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.856179 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.856193 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.856204 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.959553 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.959668 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.959695 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.959727 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:25 crc kubenswrapper[4909]: I1126 07:01:25.959747 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:25Z","lastTransitionTime":"2025-11-26T07:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.062430 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.062517 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.062541 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.062575 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.062630 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:26Z","lastTransitionTime":"2025-11-26T07:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.165480 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.165552 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.165658 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.165691 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.165711 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:26Z","lastTransitionTime":"2025-11-26T07:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.268578 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.268643 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.268654 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.268673 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.268704 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:26Z","lastTransitionTime":"2025-11-26T07:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.371784 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.371866 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.371883 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.371905 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.371922 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:26Z","lastTransitionTime":"2025-11-26T07:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.480255 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.480481 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.480493 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.480515 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.480530 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:26Z","lastTransitionTime":"2025-11-26T07:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.497986 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.498090 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.498037 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:26 crc kubenswrapper[4909]: E1126 07:01:26.498322 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:26 crc kubenswrapper[4909]: E1126 07:01:26.498408 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:26 crc kubenswrapper[4909]: E1126 07:01:26.498489 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.583267 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.583315 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.583326 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.583344 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.583358 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:26Z","lastTransitionTime":"2025-11-26T07:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.686264 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.686300 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.686307 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.686320 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.686328 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:26Z","lastTransitionTime":"2025-11-26T07:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.789959 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.790089 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.790111 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.790135 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.790152 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:26Z","lastTransitionTime":"2025-11-26T07:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.893830 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.893910 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.893931 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.893956 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.893973 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:26Z","lastTransitionTime":"2025-11-26T07:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.996924 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.997004 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.997024 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.997051 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:26 crc kubenswrapper[4909]: I1126 07:01:26.997071 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:26Z","lastTransitionTime":"2025-11-26T07:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.100277 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.100409 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.100432 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.100471 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.100494 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:27Z","lastTransitionTime":"2025-11-26T07:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.204123 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.204185 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.204202 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.204228 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.204244 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:27Z","lastTransitionTime":"2025-11-26T07:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.307155 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.307214 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.307226 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.307240 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.307252 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:27Z","lastTransitionTime":"2025-11-26T07:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.411115 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.411194 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.411217 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.411268 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.411313 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:27Z","lastTransitionTime":"2025-11-26T07:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.498518 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:27 crc kubenswrapper[4909]: E1126 07:01:27.498801 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.514012 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.514079 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.514102 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.514132 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.514159 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:27Z","lastTransitionTime":"2025-11-26T07:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.618169 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.618269 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.618286 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.618339 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.618358 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:27Z","lastTransitionTime":"2025-11-26T07:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.722083 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.722186 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.722211 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.722282 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.722313 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:27Z","lastTransitionTime":"2025-11-26T07:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.825444 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.825540 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.825635 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.825713 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.825737 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:27Z","lastTransitionTime":"2025-11-26T07:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.928886 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.928955 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.928974 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.928999 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:27 crc kubenswrapper[4909]: I1126 07:01:27.929023 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:27Z","lastTransitionTime":"2025-11-26T07:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.032647 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.032703 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.032713 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.032727 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.032737 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:28Z","lastTransitionTime":"2025-11-26T07:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.135688 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.135733 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.135744 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.135760 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.135771 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:28Z","lastTransitionTime":"2025-11-26T07:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.239541 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.239630 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.239650 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.239676 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.239695 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:28Z","lastTransitionTime":"2025-11-26T07:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.342459 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.342511 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.342525 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.342551 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.342565 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:28Z","lastTransitionTime":"2025-11-26T07:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.450690 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.450750 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.450769 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.450794 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.450813 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:28Z","lastTransitionTime":"2025-11-26T07:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.497929 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.497929 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.498044 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:28 crc kubenswrapper[4909]: E1126 07:01:28.498164 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:28 crc kubenswrapper[4909]: E1126 07:01:28.498340 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:28 crc kubenswrapper[4909]: E1126 07:01:28.498444 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.518604 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.538487 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.554122 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.554210 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.554222 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.554242 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.554257 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:28Z","lastTransitionTime":"2025-11-26T07:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.558488 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.573666 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.591579 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.614724 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.631028 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.650185 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.657836 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.657923 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.657939 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.657998 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.658015 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:28Z","lastTransitionTime":"2025-11-26T07:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.663529 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.675554 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.698733 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.715751 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.728992 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.745143 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.757066 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.760295 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.760319 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.760327 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.760339 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.760349 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:28Z","lastTransitionTime":"2025-11-26T07:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.779579 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:20Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290622 6573 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290917 6573 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:20.291035 6573 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:20.291090 6573 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:20.291128 6573 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:20.291216 6573 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.791461 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:28Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.866464 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.866647 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.866719 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.866769 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.866787 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:28Z","lastTransitionTime":"2025-11-26T07:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.970366 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.970405 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.970419 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.970438 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:28 crc kubenswrapper[4909]: I1126 07:01:28.970453 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:28Z","lastTransitionTime":"2025-11-26T07:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.073935 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.073997 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.074013 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.074036 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.074055 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:29Z","lastTransitionTime":"2025-11-26T07:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.176981 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.177033 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.177049 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.177072 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.177089 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:29Z","lastTransitionTime":"2025-11-26T07:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.281436 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.281512 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.281537 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.281567 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.281620 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:29Z","lastTransitionTime":"2025-11-26T07:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.384826 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.384880 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.384898 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.384923 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.384940 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:29Z","lastTransitionTime":"2025-11-26T07:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.487573 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.487654 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.487673 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.487695 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.487712 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:29Z","lastTransitionTime":"2025-11-26T07:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.498506 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:29 crc kubenswrapper[4909]: E1126 07:01:29.498724 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.591012 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.591082 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.591108 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.591141 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.591163 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:29Z","lastTransitionTime":"2025-11-26T07:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.694881 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.694988 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.695006 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.695033 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.695053 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:29Z","lastTransitionTime":"2025-11-26T07:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.798232 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.798309 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.798332 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.798364 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.798388 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:29Z","lastTransitionTime":"2025-11-26T07:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.901403 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.901512 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.901524 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.901542 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:29 crc kubenswrapper[4909]: I1126 07:01:29.901552 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:29Z","lastTransitionTime":"2025-11-26T07:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.005247 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.005338 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.005364 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.005395 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.005417 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:30Z","lastTransitionTime":"2025-11-26T07:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.109009 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.109060 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.109077 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.109099 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.109116 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:30Z","lastTransitionTime":"2025-11-26T07:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.213099 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.213161 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.213178 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.213203 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.213224 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:30Z","lastTransitionTime":"2025-11-26T07:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.316765 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.316826 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.316838 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.316857 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.316870 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:30Z","lastTransitionTime":"2025-11-26T07:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.420541 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.420666 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.420692 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.420721 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.420779 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:30Z","lastTransitionTime":"2025-11-26T07:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.499846 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.499901 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.499933 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:30 crc kubenswrapper[4909]: E1126 07:01:30.500035 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:30 crc kubenswrapper[4909]: E1126 07:01:30.500174 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:30 crc kubenswrapper[4909]: E1126 07:01:30.500330 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.523711 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.523779 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.523826 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.523870 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.523889 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:30Z","lastTransitionTime":"2025-11-26T07:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.629981 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.630048 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.630065 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.630095 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.630121 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:30Z","lastTransitionTime":"2025-11-26T07:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.733212 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.733370 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.733404 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.733438 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.733460 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:30Z","lastTransitionTime":"2025-11-26T07:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.836783 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.836844 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.836862 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.836884 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.836901 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:30Z","lastTransitionTime":"2025-11-26T07:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.939786 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.939857 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.939876 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.939901 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:30 crc kubenswrapper[4909]: I1126 07:01:30.939920 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:30Z","lastTransitionTime":"2025-11-26T07:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.042971 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.043017 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.043027 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.043044 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.043057 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:31Z","lastTransitionTime":"2025-11-26T07:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.146391 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.146434 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.146447 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.146465 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.146477 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:31Z","lastTransitionTime":"2025-11-26T07:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.250351 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.250443 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.250471 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.250503 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.250522 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:31Z","lastTransitionTime":"2025-11-26T07:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.354463 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.354512 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.354522 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.354541 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.354554 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:31Z","lastTransitionTime":"2025-11-26T07:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.457654 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.457714 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.457734 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.457768 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.457786 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:31Z","lastTransitionTime":"2025-11-26T07:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.498807 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:31 crc kubenswrapper[4909]: E1126 07:01:31.499010 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.560930 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.561007 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.561031 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.561060 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.561083 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:31Z","lastTransitionTime":"2025-11-26T07:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.672116 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.672189 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.672212 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.672241 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.672264 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:31Z","lastTransitionTime":"2025-11-26T07:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.775468 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.775535 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.775557 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.775584 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.775666 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:31Z","lastTransitionTime":"2025-11-26T07:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.879159 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.879233 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.879253 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.879313 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.879342 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:31Z","lastTransitionTime":"2025-11-26T07:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.982878 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.982918 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.982930 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.982945 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:31 crc kubenswrapper[4909]: I1126 07:01:31.982957 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:31Z","lastTransitionTime":"2025-11-26T07:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.085823 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.085889 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.085911 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.085936 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.085954 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:32Z","lastTransitionTime":"2025-11-26T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.189194 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.189231 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.189240 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.189252 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.189261 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:32Z","lastTransitionTime":"2025-11-26T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.292029 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.292083 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.292099 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.292116 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.292128 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:32Z","lastTransitionTime":"2025-11-26T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.395279 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.395314 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.395323 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.395336 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.395346 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:32Z","lastTransitionTime":"2025-11-26T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.497998 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.498041 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.498003 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:32 crc kubenswrapper[4909]: E1126 07:01:32.498230 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.498306 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.498335 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.498347 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.498361 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:32 crc kubenswrapper[4909]: E1126 07:01:32.498373 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.498372 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:32Z","lastTransitionTime":"2025-11-26T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:32 crc kubenswrapper[4909]: E1126 07:01:32.498456 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.601410 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.601476 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.601496 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.601521 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.601539 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:32Z","lastTransitionTime":"2025-11-26T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.704149 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.704233 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.704252 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.704273 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.704287 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:32Z","lastTransitionTime":"2025-11-26T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.806948 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.806994 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.807010 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.807030 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.807046 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:32Z","lastTransitionTime":"2025-11-26T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.912375 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.912416 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.912426 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.912440 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:32 crc kubenswrapper[4909]: I1126 07:01:32.912448 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:32Z","lastTransitionTime":"2025-11-26T07:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.026207 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.026238 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.026250 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.026289 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.026303 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:33Z","lastTransitionTime":"2025-11-26T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.128571 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.128632 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.128644 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.128658 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.128667 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:33Z","lastTransitionTime":"2025-11-26T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.231114 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.231150 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.231158 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.231171 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.231180 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:33Z","lastTransitionTime":"2025-11-26T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.334193 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.334238 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.334247 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.334265 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.334276 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:33Z","lastTransitionTime":"2025-11-26T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.436448 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.436492 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.436503 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.436519 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.436528 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:33Z","lastTransitionTime":"2025-11-26T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.498303 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:33 crc kubenswrapper[4909]: E1126 07:01:33.498406 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.538692 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.538730 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.538740 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.538755 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.538765 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:33Z","lastTransitionTime":"2025-11-26T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.640663 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.640700 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.640711 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.640726 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.640737 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:33Z","lastTransitionTime":"2025-11-26T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.742956 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.742987 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.742996 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.743010 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.743019 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:33Z","lastTransitionTime":"2025-11-26T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.845140 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.845180 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.845191 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.845206 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.845216 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:33Z","lastTransitionTime":"2025-11-26T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.947641 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.947684 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.947694 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.947710 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:33 crc kubenswrapper[4909]: I1126 07:01:33.947722 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:33Z","lastTransitionTime":"2025-11-26T07:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.049794 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.049835 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.049847 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.049866 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.049875 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:34Z","lastTransitionTime":"2025-11-26T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.152683 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.152738 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.152749 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.152768 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.152779 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:34Z","lastTransitionTime":"2025-11-26T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.255857 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.255914 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.255933 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.255957 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.255976 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:34Z","lastTransitionTime":"2025-11-26T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.359919 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.359984 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.360004 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.360028 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.360046 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:34Z","lastTransitionTime":"2025-11-26T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.462337 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.462690 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.462703 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.462721 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.462733 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:34Z","lastTransitionTime":"2025-11-26T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.529792 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.529810 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.529887 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.529964 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:34 crc kubenswrapper[4909]: E1126 07:01:34.530098 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:34 crc kubenswrapper[4909]: E1126 07:01:34.530225 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:34 crc kubenswrapper[4909]: E1126 07:01:34.530354 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:34 crc kubenswrapper[4909]: E1126 07:01:34.530702 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.543734 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.565481 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.565530 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.565546 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.565565 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.565580 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:34Z","lastTransitionTime":"2025-11-26T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.668525 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.668571 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.668606 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.668626 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.668639 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:34Z","lastTransitionTime":"2025-11-26T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.771608 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.771653 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.771667 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.771687 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.771700 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:34Z","lastTransitionTime":"2025-11-26T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.874794 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.874834 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.874844 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.874859 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.874868 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:34Z","lastTransitionTime":"2025-11-26T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.977788 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.977842 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.977853 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.977870 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:34 crc kubenswrapper[4909]: I1126 07:01:34.977880 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:34Z","lastTransitionTime":"2025-11-26T07:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.080027 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.080063 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.080073 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.080087 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.080096 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.183240 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.183328 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.183347 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.183373 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.183391 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.286642 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.287068 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.287173 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.287282 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.287352 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.390004 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.390093 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.390142 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.390162 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.390174 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.492847 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.492944 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.492965 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.492988 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.493006 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.596358 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.596423 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.596450 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.596478 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.596502 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.698434 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.698502 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.698514 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.698530 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.698564 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.716326 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.716360 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.716373 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.716386 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.716396 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: E1126 07:01:35.731125 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.735530 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.735564 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.735574 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.735585 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.735608 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: E1126 07:01:35.755146 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.759064 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.759135 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.759146 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.759165 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.759177 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: E1126 07:01:35.777100 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.780762 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.780820 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.780838 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.780856 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.780873 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: E1126 07:01:35.798988 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.804930 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.805441 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.805516 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.805619 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.805741 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: E1126 07:01:35.818489 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:35Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:35 crc kubenswrapper[4909]: E1126 07:01:35.818918 4909 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.821190 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.821313 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.821394 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.821474 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.821543 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.923859 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.923907 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.923916 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.923931 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:35 crc kubenswrapper[4909]: I1126 07:01:35.923942 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:35Z","lastTransitionTime":"2025-11-26T07:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.025964 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.026014 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.026029 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.026046 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.026058 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:36Z","lastTransitionTime":"2025-11-26T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.129056 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.129110 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.129122 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.129141 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.129153 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:36Z","lastTransitionTime":"2025-11-26T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.231465 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.231759 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.231830 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.231900 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.231967 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:36Z","lastTransitionTime":"2025-11-26T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.334957 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.335007 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.335019 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.335037 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.335048 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:36Z","lastTransitionTime":"2025-11-26T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.439760 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.439808 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.439817 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.439831 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.439842 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:36Z","lastTransitionTime":"2025-11-26T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.498831 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.498964 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.499002 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:36 crc kubenswrapper[4909]: E1126 07:01:36.499008 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.499033 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:36 crc kubenswrapper[4909]: E1126 07:01:36.499082 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:36 crc kubenswrapper[4909]: E1126 07:01:36.499192 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:36 crc kubenswrapper[4909]: E1126 07:01:36.499269 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.542810 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.543214 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.543400 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.543583 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.543803 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:36Z","lastTransitionTime":"2025-11-26T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.647022 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.647316 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.647391 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.647462 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.647524 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:36Z","lastTransitionTime":"2025-11-26T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.750822 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.750882 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.750893 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.750914 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.750927 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:36Z","lastTransitionTime":"2025-11-26T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.854321 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.854407 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.854421 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.854449 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.854468 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:36Z","lastTransitionTime":"2025-11-26T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.956907 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.957338 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.957542 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.957771 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:36 crc kubenswrapper[4909]: I1126 07:01:36.957821 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:36Z","lastTransitionTime":"2025-11-26T07:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.081343 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.081418 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.081445 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.081479 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.081504 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:37Z","lastTransitionTime":"2025-11-26T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.183829 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.183871 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.183883 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.183897 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.183909 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:37Z","lastTransitionTime":"2025-11-26T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.286385 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.286429 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.286441 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.286457 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.286473 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:37Z","lastTransitionTime":"2025-11-26T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.389422 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.389461 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.389469 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.389482 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.389491 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:37Z","lastTransitionTime":"2025-11-26T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.492830 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.492868 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.492877 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.492890 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.492900 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:37Z","lastTransitionTime":"2025-11-26T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.499455 4909 scope.go:117] "RemoveContainer" containerID="0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5" Nov 26 07:01:37 crc kubenswrapper[4909]: E1126 07:01:37.499724 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.596323 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.596376 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.596390 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.596407 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.596420 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:37Z","lastTransitionTime":"2025-11-26T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.700187 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.700246 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.700265 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.700288 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.700303 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:37Z","lastTransitionTime":"2025-11-26T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.803025 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.803115 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.803132 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.803169 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.803188 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:37Z","lastTransitionTime":"2025-11-26T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.905453 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.905535 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.905552 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.905577 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:37 crc kubenswrapper[4909]: I1126 07:01:37.905649 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:37Z","lastTransitionTime":"2025-11-26T07:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.007292 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.007337 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.007347 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.007410 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.007424 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:38Z","lastTransitionTime":"2025-11-26T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.110013 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.110054 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.110066 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.110082 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.110094 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:38Z","lastTransitionTime":"2025-11-26T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.212905 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.212943 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.212954 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.212969 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.212979 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:38Z","lastTransitionTime":"2025-11-26T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.315213 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.315268 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.315284 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.315302 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.315314 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:38Z","lastTransitionTime":"2025-11-26T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.417952 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.418334 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.418376 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.418407 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.418431 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:38Z","lastTransitionTime":"2025-11-26T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.498085 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.498153 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.498099 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.498321 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:38 crc kubenswrapper[4909]: E1126 07:01:38.498291 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:38 crc kubenswrapper[4909]: E1126 07:01:38.498411 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:38 crc kubenswrapper[4909]: E1126 07:01:38.498470 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:38 crc kubenswrapper[4909]: E1126 07:01:38.498491 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.514342 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.522541 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.522634 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.522647 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.522683 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.522696 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:38Z","lastTransitionTime":"2025-11-26T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.527488 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.539111 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.558035 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.572650 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.592991 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:20Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290622 6573 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290917 6573 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:20.291035 6573 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:20.291090 6573 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:20.291128 6573 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:20.291216 6573 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.603072 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.614040 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.624211 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fe44782-0b65-45ec-b4fe-c714752c16e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698a6e39c834e6c9a1a357f19476559f563af9076acfe5e96aba18e4b839777e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.626061 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.626094 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.626105 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.626122 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.626134 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:38Z","lastTransitionTime":"2025-11-26T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.637095 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.649680 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.661055 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.673906 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.684489 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.693962 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.705433 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.716578 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.728355 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.728407 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.728420 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.728437 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.728450 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:38Z","lastTransitionTime":"2025-11-26T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.728972 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:38Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.832940 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.833014 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.833041 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.833071 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.833097 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:38Z","lastTransitionTime":"2025-11-26T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.935223 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.935262 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.935271 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.935286 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:38 crc kubenswrapper[4909]: I1126 07:01:38.935314 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:38Z","lastTransitionTime":"2025-11-26T07:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.037449 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.037525 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.037544 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.037580 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.037615 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:39Z","lastTransitionTime":"2025-11-26T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.140497 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.140547 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.140565 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.140586 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.140632 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:39Z","lastTransitionTime":"2025-11-26T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.243214 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.243267 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.243283 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.243306 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.243323 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:39Z","lastTransitionTime":"2025-11-26T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.346078 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.346143 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.346155 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.346174 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.346186 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:39Z","lastTransitionTime":"2025-11-26T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.449119 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.449173 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.449186 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.449207 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.449222 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:39Z","lastTransitionTime":"2025-11-26T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.553715 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.553801 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.553825 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.553906 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.553989 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:39Z","lastTransitionTime":"2025-11-26T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.656175 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.656208 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.656218 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.656229 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.656238 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:39Z","lastTransitionTime":"2025-11-26T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.758644 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.758686 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.758699 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.758716 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.758729 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:39Z","lastTransitionTime":"2025-11-26T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.810747 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:39 crc kubenswrapper[4909]: E1126 07:01:39.810865 4909 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:39 crc kubenswrapper[4909]: E1126 07:01:39.810925 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs podName:6e91888f-077f-4be0-a258-568bde5c10bd nodeName:}" failed. No retries permitted until 2025-11-26 07:02:11.810911371 +0000 UTC m=+103.957122537 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs") pod "network-metrics-daemon-8llwb" (UID: "6e91888f-077f-4be0-a258-568bde5c10bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.861863 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.861895 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.861906 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.861918 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.861926 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:39Z","lastTransitionTime":"2025-11-26T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.964766 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.964803 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.964813 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.964847 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:39 crc kubenswrapper[4909]: I1126 07:01:39.964857 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:39Z","lastTransitionTime":"2025-11-26T07:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.067696 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.067745 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.067761 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.067781 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.067795 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:40Z","lastTransitionTime":"2025-11-26T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.170551 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.170609 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.170619 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.170633 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.170643 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:40Z","lastTransitionTime":"2025-11-26T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.272451 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.272497 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.272509 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.272524 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.272537 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:40Z","lastTransitionTime":"2025-11-26T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.374901 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.374936 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.374944 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.374956 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.374965 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:40Z","lastTransitionTime":"2025-11-26T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.477884 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.477939 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.477953 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.477969 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.477982 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:40Z","lastTransitionTime":"2025-11-26T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.498513 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.498513 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.498547 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.498625 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:40 crc kubenswrapper[4909]: E1126 07:01:40.498720 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:40 crc kubenswrapper[4909]: E1126 07:01:40.498776 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:40 crc kubenswrapper[4909]: E1126 07:01:40.498907 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:40 crc kubenswrapper[4909]: E1126 07:01:40.498986 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.581134 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.581175 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.581186 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.581199 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.581211 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:40Z","lastTransitionTime":"2025-11-26T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.682899 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.682940 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.682951 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.682968 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.682981 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:40Z","lastTransitionTime":"2025-11-26T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.784827 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.784878 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.784892 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.784906 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.784917 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:40Z","lastTransitionTime":"2025-11-26T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.889886 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.889929 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.889945 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.889968 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.889984 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:40Z","lastTransitionTime":"2025-11-26T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.992465 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.992526 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.992742 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.992789 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:40 crc kubenswrapper[4909]: I1126 07:01:40.992812 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:40Z","lastTransitionTime":"2025-11-26T07:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.094918 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.094983 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.095000 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.095022 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.095037 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:41Z","lastTransitionTime":"2025-11-26T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.197077 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.197143 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.197152 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.197166 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.197175 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:41Z","lastTransitionTime":"2025-11-26T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.299701 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.299741 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.299750 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.299764 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.299774 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:41Z","lastTransitionTime":"2025-11-26T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.402935 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.403031 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.403050 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.403071 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.403088 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:41Z","lastTransitionTime":"2025-11-26T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.505685 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.505716 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.505724 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.505735 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.505747 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:41Z","lastTransitionTime":"2025-11-26T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.608488 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.608553 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.608573 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.608640 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.608669 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:41Z","lastTransitionTime":"2025-11-26T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.711951 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.712078 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.712095 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.712117 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.712133 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:41Z","lastTransitionTime":"2025-11-26T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.814527 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.814579 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.814623 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.814648 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.814664 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:41Z","lastTransitionTime":"2025-11-26T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.917751 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.917830 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.917862 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.917891 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:41 crc kubenswrapper[4909]: I1126 07:01:41.917908 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:41Z","lastTransitionTime":"2025-11-26T07:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.021957 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.022036 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.022058 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.022112 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.022139 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:42Z","lastTransitionTime":"2025-11-26T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.023826 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b4ts_3d586ea3-b189-476f-9e44-4579388f3107/kube-multus/0.log" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.023894 4909 generic.go:334] "Generic (PLEG): container finished" podID="3d586ea3-b189-476f-9e44-4579388f3107" containerID="a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419" exitCode=1 Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.023956 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b4ts" event={"ID":"3d586ea3-b189-476f-9e44-4579388f3107","Type":"ContainerDied","Data":"a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419"} Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.024949 4909 scope.go:117] "RemoveContainer" containerID="a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.053831 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214
df0b7887a800c1639d57cbd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:20Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290622 6573 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290917 6573 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:20.291035 6573 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:20.291090 6573 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:20.291128 6573 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:20.291216 6573 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.068980 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.086066 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.102767 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fe44782-0b65-45ec-b4fe-c714752c16e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698a6e39c834e6c9a1a357f19476559f563af9076acfe5e96aba18e4b839777e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.125533 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.125577 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.125610 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.125688 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.125771 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:42Z","lastTransitionTime":"2025-11-26T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.128317 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.142925 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.159201 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.173266 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.186664 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.202487 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.218234 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.228451 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.228490 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.228499 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.228513 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.228522 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:42Z","lastTransitionTime":"2025-11-26T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.232344 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.247132 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:41Z\\\",\\\"message\\\":\\\"2025-11-26T07:00:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db\\\\n2025-11-26T07:00:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db to /host/opt/cni/bin/\\\\n2025-11-26T07:00:56Z [verbose] multus-daemon started\\\\n2025-11-26T07:00:56Z [verbose] Readiness Indicator file check\\\\n2025-11-26T07:01:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.259249 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.273359 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.287442 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.298515 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.314263 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:42Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.331210 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.331260 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:42 crc 
kubenswrapper[4909]: I1126 07:01:42.331295 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.331314 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.331335 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:42Z","lastTransitionTime":"2025-11-26T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.433812 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.433892 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.433981 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.434000 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.434012 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:42Z","lastTransitionTime":"2025-11-26T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.498955 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.499046 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.498969 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.499060 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:42 crc kubenswrapper[4909]: E1126 07:01:42.499160 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:42 crc kubenswrapper[4909]: E1126 07:01:42.499277 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:42 crc kubenswrapper[4909]: E1126 07:01:42.499399 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:42 crc kubenswrapper[4909]: E1126 07:01:42.499442 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.537135 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.537182 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.537191 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.537205 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.537217 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:42Z","lastTransitionTime":"2025-11-26T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.640396 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.640449 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.640470 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.640499 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.640518 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:42Z","lastTransitionTime":"2025-11-26T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.742563 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.742641 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.742657 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.742678 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.742694 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:42Z","lastTransitionTime":"2025-11-26T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.846077 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.846175 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.846204 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.846230 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.846248 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:42Z","lastTransitionTime":"2025-11-26T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.948500 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.948581 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.948640 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.948673 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:42 crc kubenswrapper[4909]: I1126 07:01:42.948698 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:42Z","lastTransitionTime":"2025-11-26T07:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.030079 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b4ts_3d586ea3-b189-476f-9e44-4579388f3107/kube-multus/0.log" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.030175 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b4ts" event={"ID":"3d586ea3-b189-476f-9e44-4579388f3107","Type":"ContainerStarted","Data":"e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be"} Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.051652 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.051714 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.051731 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.051757 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.051776 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:43Z","lastTransitionTime":"2025-11-26T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.056238 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.073742 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.088910 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.110734 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:41Z\\\",\\\"message\\\":\\\"2025-11-26T07:00:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db\\\\n2025-11-26T07:00:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db to /host/opt/cni/bin/\\\\n2025-11-26T07:00:56Z [verbose] multus-daemon started\\\\n2025-11-26T07:00:56Z [verbose] Readiness Indicator file check\\\\n2025-11-26T07:01:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.139237 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.154996 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.155065 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.155087 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.155117 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.155139 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:43Z","lastTransitionTime":"2025-11-26T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.158845 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.173037 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.186756 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.202945 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.226161 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:20Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290622 6573 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290917 6573 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:20.291035 6573 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:20.291090 6573 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:20.291128 6573 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:20.291216 6573 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.241855 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.257160 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.258435 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.258517 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.258546 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.258579 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.258643 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:43Z","lastTransitionTime":"2025-11-26T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.270149 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fe44782-0b65-45ec-b4fe-c714752c16e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698a6e39c834e6c9a1a357f19476559f563af9076acfe5e96aba18e4b839777e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.283215 4909 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.297726 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.311418 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.322908 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.333068 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:43Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.360970 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.361002 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.361012 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.361027 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.361036 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:43Z","lastTransitionTime":"2025-11-26T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.463530 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.463622 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.463647 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.463671 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.463685 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:43Z","lastTransitionTime":"2025-11-26T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.566664 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.566722 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.566744 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.566770 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.566784 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:43Z","lastTransitionTime":"2025-11-26T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.669046 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.669122 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.669157 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.669188 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.669209 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:43Z","lastTransitionTime":"2025-11-26T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.772174 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.772348 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.772375 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.772406 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.772426 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:43Z","lastTransitionTime":"2025-11-26T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.875169 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.875216 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.875227 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.875242 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.875253 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:43Z","lastTransitionTime":"2025-11-26T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.979388 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.979442 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.979459 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.979479 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:43 crc kubenswrapper[4909]: I1126 07:01:43.979493 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:43Z","lastTransitionTime":"2025-11-26T07:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.082720 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.082778 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.082790 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.082807 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.082820 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:44Z","lastTransitionTime":"2025-11-26T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.185282 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.185341 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.185357 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.185378 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.185394 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:44Z","lastTransitionTime":"2025-11-26T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.287898 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.287949 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.287985 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.288023 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.288047 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:44Z","lastTransitionTime":"2025-11-26T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.391287 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.391373 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.391399 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.391432 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.391456 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:44Z","lastTransitionTime":"2025-11-26T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.494295 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.494372 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.494396 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.494428 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.494465 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:44Z","lastTransitionTime":"2025-11-26T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.498875 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:44 crc kubenswrapper[4909]: E1126 07:01:44.499052 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.499375 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:44 crc kubenswrapper[4909]: E1126 07:01:44.499528 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.499843 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.499933 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:44 crc kubenswrapper[4909]: E1126 07:01:44.499953 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:44 crc kubenswrapper[4909]: E1126 07:01:44.500122 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.597110 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.597202 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.597230 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.597262 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.597284 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:44Z","lastTransitionTime":"2025-11-26T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.707307 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.707397 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.707422 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.707454 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.707477 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:44Z","lastTransitionTime":"2025-11-26T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.810921 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.810987 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.811006 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.811035 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.811061 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:44Z","lastTransitionTime":"2025-11-26T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.914124 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.914208 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.914231 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.914305 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:44 crc kubenswrapper[4909]: I1126 07:01:44.914329 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:44Z","lastTransitionTime":"2025-11-26T07:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.018151 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.018203 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.018219 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.018241 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.018257 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.120637 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.120665 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.120678 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.120692 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.120702 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.223723 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.223771 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.223788 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.223810 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.223826 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.326452 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.326512 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.326528 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.326550 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.326566 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.429323 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.429411 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.429436 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.429891 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.430435 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.534890 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.534938 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.534955 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.534979 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.534996 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.638333 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.638399 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.638418 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.638443 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.638459 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.743195 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.743241 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.743252 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.743276 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.743294 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.846805 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.846871 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.846896 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.846923 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.846946 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.915467 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.915532 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.915554 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.915582 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.915636 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: E1126 07:01:45.938135 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.944879 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.944938 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.944954 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.944978 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.944999 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: E1126 07:01:45.966467 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.971623 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.971657 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.971670 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.971688 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.971700 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:45 crc kubenswrapper[4909]: E1126 07:01:45.993708 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:45Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.998010 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.998051 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.998064 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.998085 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:45 crc kubenswrapper[4909]: I1126 07:01:45.998098 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:45Z","lastTransitionTime":"2025-11-26T07:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:46 crc kubenswrapper[4909]: E1126 07:01:46.018049 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.023453 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.023557 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.023570 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.023603 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.023616 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:46Z","lastTransitionTime":"2025-11-26T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:46 crc kubenswrapper[4909]: E1126 07:01:46.043843 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:46Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:46 crc kubenswrapper[4909]: E1126 07:01:46.044010 4909 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.045631 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
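The consecutive status-patch failures above share one root cause: the kubelet cannot verify the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, because the node's clock (2025-11-26T07:01:46Z) is past the certificate's notAfter date (2025-08-24T17:21:41Z). A minimal sketch for confirming the expiry from the node, assuming Python 3 with the third-party cryptography package is available; the host and port come from the log, everything else is illustrative:

    import datetime
    import ssl

    from cryptography import x509

    # Fetch the webhook's serving certificate without verifying it;
    # verification is exactly the step that fails for the kubelet.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())

    now = datetime.datetime.now(datetime.timezone.utc)
    not_after = cert.not_valid_after.replace(tzinfo=datetime.timezone.utc)

    print(f"notAfter={not_after.isoformat()}  now={now.isoformat()}")
    if now > not_after:
        # Mirrors the log: "current time 2025-11-26T07:01:46Z is after 2025-08-24T17:21:41Z"
        print("serving certificate has expired; status patches will keep failing until it is rotated")

Until that certificate is rotated (or the webhook becomes reachable with a valid one), every node-status update will exhaust its retries exactly as the "update node status exceeds retry count" record shows.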
event="NodeHasSufficientMemory" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.045669 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.045687 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.045705 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.045719 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:46Z","lastTransitionTime":"2025-11-26T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.149891 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.149976 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.150020 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.150055 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.150079 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:46Z","lastTransitionTime":"2025-11-26T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.253537 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.253582 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.253626 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.253648 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.253664 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:46Z","lastTransitionTime":"2025-11-26T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.357284 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.357353 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.357365 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.357387 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.357401 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:46Z","lastTransitionTime":"2025-11-26T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.460806 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.460862 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.460871 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.460897 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.460911 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:46Z","lastTransitionTime":"2025-11-26T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.498873 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.498875 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.499018 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.498944 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:46 crc kubenswrapper[4909]: E1126 07:01:46.499355 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:46 crc kubenswrapper[4909]: E1126 07:01:46.499494 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:46 crc kubenswrapper[4909]: E1126 07:01:46.499631 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:46 crc kubenswrapper[4909]: E1126 07:01:46.499710 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.564798 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.564864 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.564889 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.564919 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.564941 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:46Z","lastTransitionTime":"2025-11-26T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.667979 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.668052 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.668064 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.668101 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.668117 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:46Z","lastTransitionTime":"2025-11-26T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.771044 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.771079 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.771088 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.771101 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.771111 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:46Z","lastTransitionTime":"2025-11-26T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.873321 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.873376 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.873420 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.873444 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.873456 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:46Z","lastTransitionTime":"2025-11-26T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.976451 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.976511 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.976522 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.976536 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:46 crc kubenswrapper[4909]: I1126 07:01:46.976545 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:46Z","lastTransitionTime":"2025-11-26T07:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.079141 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.079196 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.079213 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.079234 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.079253 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:47Z","lastTransitionTime":"2025-11-26T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.181927 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.181965 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.181975 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.182010 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.182024 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:47Z","lastTransitionTime":"2025-11-26T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.284768 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.284810 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.284840 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.284855 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.284865 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:47Z","lastTransitionTime":"2025-11-26T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.388586 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.388691 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.388711 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.389127 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.389409 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:47Z","lastTransitionTime":"2025-11-26T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.493280 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.493356 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.493379 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.493409 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.493433 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:47Z","lastTransitionTime":"2025-11-26T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.595921 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.596011 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.596025 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.596048 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.596409 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:47Z","lastTransitionTime":"2025-11-26T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.699158 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.699184 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.699193 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.699205 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.699213 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:47Z","lastTransitionTime":"2025-11-26T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.801861 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.801941 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.801966 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.801999 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.802024 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:47Z","lastTransitionTime":"2025-11-26T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.904456 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.904565 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.904622 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.904653 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:47 crc kubenswrapper[4909]: I1126 07:01:47.904674 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:47Z","lastTransitionTime":"2025-11-26T07:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.007432 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.007471 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.007482 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.007528 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.007543 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:48Z","lastTransitionTime":"2025-11-26T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.110139 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.110186 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.110198 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.110214 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.110226 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:48Z","lastTransitionTime":"2025-11-26T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.213873 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.213951 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.213967 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.213988 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.214018 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:48Z","lastTransitionTime":"2025-11-26T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.317245 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.317300 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.317315 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.317341 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.317360 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:48Z","lastTransitionTime":"2025-11-26T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.419854 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.419897 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.419906 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.419920 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.419929 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:48Z","lastTransitionTime":"2025-11-26T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.498889 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.499006 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.499108 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.499214 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:48 crc kubenswrapper[4909]: E1126 07:01:48.499210 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:48 crc kubenswrapper[4909]: E1126 07:01:48.499396 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:48 crc kubenswrapper[4909]: E1126 07:01:48.499569 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:48 crc kubenswrapper[4909]: E1126 07:01:48.499663 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.515829 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.523861 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.523927 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.523949 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.523980 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.524001 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:48Z","lastTransitionTime":"2025-11-26T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.531047 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.553203 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.573425 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.590573 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fe44782-0b65-45ec-b4fe-c714752c16e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698a6e39c834e6c9a1a357f19476559f563af9076acfe5e96aba18e4b839777e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.611316 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.626025 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.626066 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.626079 4909 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.626097 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.626113 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:48Z","lastTransitionTime":"2025-11-26T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.630515 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.666010 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.704298 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214
df0b7887a800c1639d57cbd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:20Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290622 6573 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290917 6573 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:20.291035 6573 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:20.291090 6573 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:20.291128 6573 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:20.291216 6573 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.716661 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.728643 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.728674 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.728685 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.728699 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.728708 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:48Z","lastTransitionTime":"2025-11-26T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.729628 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.742506 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.754780 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.768641 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.784275 4909 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.800499 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:41Z\\\",\\\"message\\\":\\\"2025-11-26T07:00:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db\\\\n2025-11-26T07:00:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db to /host/opt/cni/bin/\\\\n2025-11-26T07:00:56Z [verbose] multus-daemon started\\\\n2025-11-26T07:00:56Z [verbose] Readiness Indicator file check\\\\n2025-11-26T07:01:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.811093 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.822508 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:48Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.831453 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.831492 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.831505 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.831520 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.831529 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:48Z","lastTransitionTime":"2025-11-26T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.935286 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.935338 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.935356 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.935379 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:48 crc kubenswrapper[4909]: I1126 07:01:48.935395 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:48Z","lastTransitionTime":"2025-11-26T07:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.039540 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.040057 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.040077 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.040103 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.040121 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:49Z","lastTransitionTime":"2025-11-26T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.143895 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.143956 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.143974 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.143997 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.144015 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:49Z","lastTransitionTime":"2025-11-26T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.248019 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.248078 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.248096 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.248121 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.248138 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:49Z","lastTransitionTime":"2025-11-26T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.351194 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.351247 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.351266 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.351286 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.351302 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:49Z","lastTransitionTime":"2025-11-26T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.454684 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.454745 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.454763 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.454786 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.454802 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:49Z","lastTransitionTime":"2025-11-26T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.499181 4909 scope.go:117] "RemoveContainer" containerID="0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.557892 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.557949 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.557965 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.557987 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.558002 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:49Z","lastTransitionTime":"2025-11-26T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.660243 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.660319 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.660343 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.660370 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.660387 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:49Z","lastTransitionTime":"2025-11-26T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.763409 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.763449 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.763459 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.763476 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.763488 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:49Z","lastTransitionTime":"2025-11-26T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.866747 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.866804 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.866822 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.866846 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.866866 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:49Z","lastTransitionTime":"2025-11-26T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.970297 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.970347 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.970359 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.970377 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:49 crc kubenswrapper[4909]: I1126 07:01:49.970392 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:49Z","lastTransitionTime":"2025-11-26T07:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.055852 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/2.log" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.059038 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39"} Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.059672 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.073218 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13
558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.073346 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.073381 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.073392 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.073408 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.073420 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:50Z","lastTransitionTime":"2025-11-26T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.086342 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fe44782-0b65-45ec-b4fe-c714752c16e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698a6e39c834e6c9a1a357f19476559f563af9076acfe5e96aba18e4b839777e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.101374 4909 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.115909 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.128531 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.149497 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:20Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290622 6573 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290917 6573 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:20.291035 6573 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:20.291090 6573 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:20.291128 6573 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:20.291216 6573 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.159883 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.173216 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.176277 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.176332 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.176349 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.176367 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.176379 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:50Z","lastTransitionTime":"2025-11-26T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.193341 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.206297 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 
07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.223953 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.237941 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.249508 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:41Z\\\",\\\"message\\\":\\\"2025-11-26T07:00:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db\\\\n2025-11-26T07:00:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db to /host/opt/cni/bin/\\\\n2025-11-26T07:00:56Z [verbose] multus-daemon started\\\\n2025-11-26T07:00:56Z [verbose] Readiness Indicator file check\\\\n2025-11-26T07:01:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.259934 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.272719 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.279027 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.279081 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.279093 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.279110 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.279121 4909 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:50Z","lastTransitionTime":"2025-11-26T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.284745 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.294546 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.312725 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:50Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.381264 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.381319 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:50 crc 
kubenswrapper[4909]: I1126 07:01:50.381330 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.381345 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.381355 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:50Z","lastTransitionTime":"2025-11-26T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.484749 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.484799 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.484817 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.484840 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.484857 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:50Z","lastTransitionTime":"2025-11-26T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.498810 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.498813 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.498882 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:50 crc kubenswrapper[4909]: E1126 07:01:50.499039 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.499062 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:50 crc kubenswrapper[4909]: E1126 07:01:50.499225 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:50 crc kubenswrapper[4909]: E1126 07:01:50.499249 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:50 crc kubenswrapper[4909]: E1126 07:01:50.499314 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.588548 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.588660 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.588680 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.588704 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.588722 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:50Z","lastTransitionTime":"2025-11-26T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.691838 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.691918 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.691954 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.691985 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.692010 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:50Z","lastTransitionTime":"2025-11-26T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.794067 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.794100 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.794110 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.794148 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.794162 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:50Z","lastTransitionTime":"2025-11-26T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.897014 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.897080 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.897100 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.897123 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:50 crc kubenswrapper[4909]: I1126 07:01:50.897139 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:50Z","lastTransitionTime":"2025-11-26T07:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.000215 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.000288 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.000309 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.000330 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.000345 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:51Z","lastTransitionTime":"2025-11-26T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.065675 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/3.log" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.066773 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/2.log" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.070831 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" exitCode=1 Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.070876 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39"} Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.070951 4909 scope.go:117] "RemoveContainer" containerID="0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.072090 4909 scope.go:117] "RemoveContainer" containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" Nov 26 07:01:51 crc kubenswrapper[4909]: E1126 07:01:51.072396 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.098024 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.107931 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.107988 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.108005 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.108027 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.108049 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:51Z","lastTransitionTime":"2025-11-26T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.117253 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.135031 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 
07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.154254 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.171809 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.189146 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:41Z\\\",\\\"message\\\":\\\"2025-11-26T07:00:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db\\\\n2025-11-26T07:00:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db to /host/opt/cni/bin/\\\\n2025-11-26T07:00:56Z [verbose] multus-daemon started\\\\n2025-11-26T07:00:56Z [verbose] Readiness Indicator file check\\\\n2025-11-26T07:01:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.206947 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.212321 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.212428 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.212534 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.212744 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.212780 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:51Z","lastTransitionTime":"2025-11-26T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.231482 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.246963 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.265946 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.287397 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.308077 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.320321 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.320395 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.320413 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.320437 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.320454 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:51Z","lastTransitionTime":"2025-11-26T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.324800 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fe44782-0b65-45ec-b4fe-c714752c16e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698a6e39c834e6c9a1a357f19476559f563af9076acfe5e96aba18e4b839777e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.347876 4909 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.367334 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.383878 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.418740 4909 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbf132b7bb5c6bcf12036eb58349b56b6c68214df0b7887a800c1639d57cbd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:20Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290622 6573 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1126 07:01:20.290917 6573 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1126 07:01:20.291035 6573 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1126 07:01:20.291090 6573 ovnkube.go:599] Stopped ovnkube\\\\nI1126 07:01:20.291128 6573 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1126 07:01:20.291216 6573 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:50Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1126 07:01:50.386920 6964 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1126 07:01:50.386920 6964 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 4.688243ms\\\\nI1126 07:01:50.386954 6964 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI1126 07:01:50.386954 6964 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1126 07:01:50.387016 6964 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.423921 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.423991 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.424009 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.424034 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.424060 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:51Z","lastTransitionTime":"2025-11-26T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.436549 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:51Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.527212 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.527284 4909 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.527307 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.527339 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.527361 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:51Z","lastTransitionTime":"2025-11-26T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.631038 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.631203 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.631233 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.631263 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.631286 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:51Z","lastTransitionTime":"2025-11-26T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.734678 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.734748 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.734771 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.734803 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.734825 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:51Z","lastTransitionTime":"2025-11-26T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.837568 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.837665 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.837701 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.837730 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.837747 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:51Z","lastTransitionTime":"2025-11-26T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.940921 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.941031 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.941049 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.941071 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:51 crc kubenswrapper[4909]: I1126 07:01:51.941087 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:51Z","lastTransitionTime":"2025-11-26T07:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.044150 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.044209 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.044231 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.044260 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.044281 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:52Z","lastTransitionTime":"2025-11-26T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.079276 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/3.log" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.084876 4909 scope.go:117] "RemoveContainer" containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.085124 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.110196 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.132682 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.147256 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.147307 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.147325 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.147348 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.147366 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:52Z","lastTransitionTime":"2025-11-26T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.149214 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.174410 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.192073 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.209353 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fe44782-0b65-45ec-b4fe-c714752c16e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698a6e39c834e6c9a1a357f19476559f563af9076acfe5e96aba18e4b839777e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.225651 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.242464 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.250156 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.250207 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.250218 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.250237 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.250249 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:52Z","lastTransitionTime":"2025-11-26T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.259941 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.292061 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:50Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1126 07:01:50.386920 6964 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1126 07:01:50.386920 6964 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 4.688243ms\\\\nI1126 07:01:50.386954 6964 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI1126 07:01:50.386954 6964 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1126 07:01:50.387016 6964 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.310143 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.325258 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.342340 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.352997 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.353053 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.353073 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.353100 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.353117 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:52Z","lastTransitionTime":"2025-11-26T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.355202 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.367448 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.377746 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.388639 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:41Z\\\",\\\"message\\\":\\\"2025-11-26T07:00:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db\\\\n2025-11-26T07:00:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db to /host/opt/cni/bin/\\\\n2025-11-26T07:00:56Z [verbose] multus-daemon started\\\\n2025-11-26T07:00:56Z [verbose] Readiness Indicator file check\\\\n2025-11-26T07:01:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.396577 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:52Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.462038 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.462685 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.462211561 +0000 UTC m=+148.608422757 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.462766 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.462824 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.462880 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.462943 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.462982 4909 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.462985 4909 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.463031 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.463017602 +0000 UTC m=+148.609228768 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.462984 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.463108 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.463079764 +0000 UTC m=+148.609290970 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.463113 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.463139 4909 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.463151 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.463173 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.463163486 +0000 UTC m=+148.609374652 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.463174 4909 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.463200 4909 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.463257 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.463239918 +0000 UTC m=+148.609451124 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.463943 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.463981 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.463992 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.464007 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.464017 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:52Z","lastTransitionTime":"2025-11-26T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.498866 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.498895 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.498920 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.498866 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.499092 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.499214 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.499545 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd"
Nov 26 07:01:52 crc kubenswrapper[4909]: E1126 07:01:52.499707 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.566159 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.566209 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.566225 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.566242 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.566254 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:52Z","lastTransitionTime":"2025-11-26T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.669866 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.669943 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.669965 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.669993 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.670015 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:52Z","lastTransitionTime":"2025-11-26T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.774276 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.774340 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.774361 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.774383 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.774400 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:52Z","lastTransitionTime":"2025-11-26T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.877739 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.877778 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.877788 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.877803 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.877814 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:52Z","lastTransitionTime":"2025-11-26T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.980883 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.980960 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.980979 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.981005 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:52 crc kubenswrapper[4909]: I1126 07:01:52.981027 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:52Z","lastTransitionTime":"2025-11-26T07:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.084147 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.084223 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.084245 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.084275 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.084298 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:53Z","lastTransitionTime":"2025-11-26T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.186893 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.186930 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.186941 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.186957 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.186969 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:53Z","lastTransitionTime":"2025-11-26T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.290478 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.290556 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.290579 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.290655 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.290680 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:53Z","lastTransitionTime":"2025-11-26T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.394309 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.394380 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.394403 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.394429 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.394449 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:53Z","lastTransitionTime":"2025-11-26T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.496457 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.496554 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.496584 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.496649 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.496670 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:53Z","lastTransitionTime":"2025-11-26T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.600024 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.600069 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.600102 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.600122 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.600133 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:53Z","lastTransitionTime":"2025-11-26T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.703294 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.703362 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.703380 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.703404 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.703421 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:53Z","lastTransitionTime":"2025-11-26T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.806912 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.806976 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.806992 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.807016 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.807032 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:53Z","lastTransitionTime":"2025-11-26T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.910294 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.910364 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.910382 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.910410 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:53 crc kubenswrapper[4909]: I1126 07:01:53.910432 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:53Z","lastTransitionTime":"2025-11-26T07:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.013732 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.013799 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.013817 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.013842 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.013859 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:54Z","lastTransitionTime":"2025-11-26T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.116545 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.116683 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.116716 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.116744 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.116762 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:54Z","lastTransitionTime":"2025-11-26T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.219697 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.219774 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.219797 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.219828 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.219850 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:54Z","lastTransitionTime":"2025-11-26T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.323397 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.323479 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.323503 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.323533 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.323557 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:54Z","lastTransitionTime":"2025-11-26T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.427036 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.427135 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.427193 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.427220 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.427239 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:54Z","lastTransitionTime":"2025-11-26T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.498926 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.498993 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.499010 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.499147 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:01:54 crc kubenswrapper[4909]: E1126 07:01:54.499295 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 07:01:54 crc kubenswrapper[4909]: E1126 07:01:54.499410 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd"
Nov 26 07:01:54 crc kubenswrapper[4909]: E1126 07:01:54.499158 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 07:01:54 crc kubenswrapper[4909]: E1126 07:01:54.499681 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.530313 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.530373 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.530390 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.530413 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.530431 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:54Z","lastTransitionTime":"2025-11-26T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.633918 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.634037 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.634073 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.634099 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.634116 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:54Z","lastTransitionTime":"2025-11-26T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.737367 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.737439 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.737467 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.737497 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.737519 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:54Z","lastTransitionTime":"2025-11-26T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.841100 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.841181 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.841204 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.841234 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.841253 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:54Z","lastTransitionTime":"2025-11-26T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.945097 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.945167 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.945185 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.945212 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:54 crc kubenswrapper[4909]: I1126 07:01:54.945230 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:54Z","lastTransitionTime":"2025-11-26T07:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.048148 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.048201 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.048219 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.048247 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.048264 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:55Z","lastTransitionTime":"2025-11-26T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.150898 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.150947 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.150958 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.150974 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.150986 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:55Z","lastTransitionTime":"2025-11-26T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.255000 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.255059 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.255083 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.255114 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.255137 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:55Z","lastTransitionTime":"2025-11-26T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.357955 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.358005 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.358017 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.358034 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.358045 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:55Z","lastTransitionTime":"2025-11-26T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.461001 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.461037 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.461047 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.461061 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.461073 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:55Z","lastTransitionTime":"2025-11-26T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.564126 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.564186 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.564206 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.564223 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.564235 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:55Z","lastTransitionTime":"2025-11-26T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.667273 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.667322 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.667333 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.667350 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.667363 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:55Z","lastTransitionTime":"2025-11-26T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.771055 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.771129 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.771155 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.771188 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.771209 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:55Z","lastTransitionTime":"2025-11-26T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.874311 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.874402 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.874421 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.874477 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.874500 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:55Z","lastTransitionTime":"2025-11-26T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.977015 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.977082 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.977098 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.977122 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:55 crc kubenswrapper[4909]: I1126 07:01:55.977140 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:55Z","lastTransitionTime":"2025-11-26T07:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.081757 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.081829 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.081859 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.081882 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.081899 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.118125 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.118190 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.118207 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.118228 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.118245 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: E1126 07:01:56.138316 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.145076 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.145161 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.145179 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.145203 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.145221 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: E1126 07:01:56.164320 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.169255 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.169335 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
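Every retry in this burst fails identically: the node-identity webhook at https://127.0.0.1:9743 presents a serving certificate whose validity window ended 2025-08-24T17:21:41Z, months before the node clock reads 2025-11-26T07:01:56Z, so the kubelet's TLS handshake is rejected before the PATCH is ever delivered. A minimal Go sketch of how one might confirm this from the node (a hypothetical stand-alone diagnostic, not kubelet code; it skips verification precisely so the expired certificate can still be inspected):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the log and read its certificate.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("NotBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("NotAfter: ", cert.NotAfter.Format(time.RFC3339))
	// The same comparison the failing x509 verification performs.
	if now := time.Now(); now.After(cert.NotAfter) {
		fmt.Println("certificate has expired")
	} else if now.Before(cert.NotBefore) {
		fmt.Println("certificate is not yet valid")
	}
}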
event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.169362 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.169394 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.169416 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: E1126 07:01:56.189229 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.194032 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.194090 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
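The condition={...} objects printed by setters.go are ordinary corev1.NodeCondition values: the kubelet sets Ready to False with reason KubeletNotReady for as long as the runtime reports NetworkReady=false. A sketch of the same structure, assuming the standard k8s.io/api and k8s.io/apimachinery modules, which reproduces the JSON shape seen in these log lines:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Same shape as the condition logged by "Node became not ready".
	cond := corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  metav1.Now(),
		LastTransitionTime: metav1.Now(),
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
	}
	b, _ := json.Marshal(cond)
	fmt.Println(string(b))
}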
event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.194109 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.194133 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.194150 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: E1126 07:01:56.215940 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.221256 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.221342 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
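Each "Error updating node status, will retry" record is one iteration of the kubelet's bounded update loop; once the per-sync retry budget is exhausted (nodeStatusUpdateRetry in the kubelet source, 5 in recent releases) it logs "Unable to update node status ... exceeds retry count", which is exactly what follows below. A simplified sketch of that control flow (an illustration of the pattern, not the actual kubelet implementation):

package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed retry budget per sync, matching recent kubelet releases

// tryPatchNodeStatus stands in for the PATCH that keeps failing in the
// log (the webhook rejects it with the expired-certificate error).
func tryPatchNodeStatus() error {
	return errors.New("failed calling webhook: x509: certificate has expired or is not yet valid")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatchNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}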
event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.221371 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.221404 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.221429 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: E1126 07:01:56.244430 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:56Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:56 crc kubenswrapper[4909]: E1126 07:01:56.244678 4909 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.247303 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
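The NotReady condition itself is independent of the webhook failure: the runtime reports NetworkReady=false because nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/, and on this cluster the network plugin's own pods likely cannot progress while node status updates are being rejected. A quick way to see what the runtime sees (a hypothetical stand-alone check that reads the directory named in the error):

package main

import (
	"fmt"
	"os"
)

func main() {
	// An empty (or missing) directory here is exactly what the
	// NetworkReady=false message is complaining about.
	const dir = "/etc/kubernetes/cni/net.d/"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "-", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println("no CNI configuration file in", dir)
		return
	}
	for _, e := range entries {
		fmt.Println("found:", e.Name())
	}
}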
event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.247345 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.247363 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.247384 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.247401 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.349219 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.349258 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.349267 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.349282 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.349291 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.452083 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.452132 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.452150 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.452174 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.452194 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.498767 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.498832 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:56 crc kubenswrapper[4909]: E1126 07:01:56.498934 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.498976 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.498787 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:56 crc kubenswrapper[4909]: E1126 07:01:56.499133 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:56 crc kubenswrapper[4909]: E1126 07:01:56.499315 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:56 crc kubenswrapper[4909]: E1126 07:01:56.499453 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.554978 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.555051 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.555072 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.555101 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.555125 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.658457 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.658504 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.658520 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.658542 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.658560 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.761810 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.761894 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.761916 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.761947 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.761969 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.865416 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.865508 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.865521 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.865538 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.865550 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.968485 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.968559 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.968582 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.968655 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:56 crc kubenswrapper[4909]: I1126 07:01:56.968683 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:56Z","lastTransitionTime":"2025-11-26T07:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.071505 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.071563 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.071628 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.071660 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.071684 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:57Z","lastTransitionTime":"2025-11-26T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.175331 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.175414 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.175439 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.175471 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.175493 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:57Z","lastTransitionTime":"2025-11-26T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.278123 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.278485 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.278503 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.278527 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.278544 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:57Z","lastTransitionTime":"2025-11-26T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.382867 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.382909 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.382921 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.382936 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.382947 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:57Z","lastTransitionTime":"2025-11-26T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.484775 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.484802 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.484809 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.484823 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.484832 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:57Z","lastTransitionTime":"2025-11-26T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.586808 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.586852 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.586863 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.586878 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.586888 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:57Z","lastTransitionTime":"2025-11-26T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.689666 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.689714 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.689731 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.689755 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.689771 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:57Z","lastTransitionTime":"2025-11-26T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.792709 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.792779 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.792805 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.792835 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.792859 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:57Z","lastTransitionTime":"2025-11-26T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.896313 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.896362 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.896383 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.896402 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:57 crc kubenswrapper[4909]: I1126 07:01:57.896417 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:57Z","lastTransitionTime":"2025-11-26T07:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.000029 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.000101 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.000113 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.000132 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.000163 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:58Z","lastTransitionTime":"2025-11-26T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.103745 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.103778 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.103785 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.103798 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.103806 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:58Z","lastTransitionTime":"2025-11-26T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.206767 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.206828 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.206846 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.206872 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.206891 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:58Z","lastTransitionTime":"2025-11-26T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.308459 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.308489 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.308499 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.308516 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.308526 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:58Z","lastTransitionTime":"2025-11-26T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.411430 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.411513 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.411814 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.411878 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.411891 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:58Z","lastTransitionTime":"2025-11-26T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.498818 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.498818 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.499116 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:01:58 crc kubenswrapper[4909]: E1126 07:01:58.499301 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.499732 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:01:58 crc kubenswrapper[4909]: E1126 07:01:58.499986 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:01:58 crc kubenswrapper[4909]: E1126 07:01:58.500075 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:01:58 crc kubenswrapper[4909]: E1126 07:01:58.500117 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.514363 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.514410 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.514429 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.514452 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.514469 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:58Z","lastTransitionTime":"2025-11-26T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.525458 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:50Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1126 07:01:50.386920 6964 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1126 07:01:50.386920 6964 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 4.688243ms\\\\nI1126 07:01:50.386954 6964 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI1126 07:01:50.386954 6964 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1126 07:01:50.387016 6964 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s 
restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.540662 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.560132 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.576773 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fe44782-0b65-45ec-b4fe-c714752c16e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698a6e39c834e6c9a1a357f19476559f563af9076acfe5e96aba18e4b839777e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.597114 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.615816 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.616791 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.616889 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.616909 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.616988 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.617074 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:58Z","lastTransitionTime":"2025-11-26T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.637042 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.653812 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.671818 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.685089 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.702471 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.717612 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.720827 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.720880 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.720890 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.720909 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.721214 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:58Z","lastTransitionTime":"2025-11-26T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.734746 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:41Z\\\",\\\"message\\\":\\\"2025-11-26T07:00:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db\\\\n2025-11-26T07:00:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db to /host/opt/cni/bin/\\\\n2025-11-26T07:00:56Z [verbose] multus-daemon started\\\\n2025-11-26T07:00:56Z [verbose] Readiness Indicator file check\\\\n2025-11-26T07:01:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.751615 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.767609 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.782357 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.801444 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.824629 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.824674 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.824684 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.824700 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.824711 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:58Z","lastTransitionTime":"2025-11-26T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.826265 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c31
46c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:01:58Z is after 2025-08-24T17:21:41Z" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.929140 4909 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.929193 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.929215 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.929238 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:58 crc kubenswrapper[4909]: I1126 07:01:58.929250 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:58Z","lastTransitionTime":"2025-11-26T07:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.034418 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.034492 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.034504 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.034532 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.034546 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:59Z","lastTransitionTime":"2025-11-26T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.138515 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.138557 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.138568 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.138602 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.138613 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:59Z","lastTransitionTime":"2025-11-26T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.240653 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.240702 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.240711 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.240726 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.240738 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:59Z","lastTransitionTime":"2025-11-26T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.343477 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.343998 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.344066 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.344197 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.344449 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:59Z","lastTransitionTime":"2025-11-26T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.448029 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.448371 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.448506 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.448758 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.448894 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:59Z","lastTransitionTime":"2025-11-26T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.552761 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.552818 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.552834 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.552856 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.552877 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:59Z","lastTransitionTime":"2025-11-26T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.654902 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.654927 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.654935 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.654971 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.654979 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:59Z","lastTransitionTime":"2025-11-26T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.757364 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.757422 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.757442 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.757466 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.757484 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:59Z","lastTransitionTime":"2025-11-26T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.860838 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.860910 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.860928 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.860951 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.860971 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:59Z","lastTransitionTime":"2025-11-26T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.963676 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.963746 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.963767 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.963790 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:01:59 crc kubenswrapper[4909]: I1126 07:01:59.963810 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:01:59Z","lastTransitionTime":"2025-11-26T07:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.066012 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.066078 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.066095 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.066123 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.066143 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:00Z","lastTransitionTime":"2025-11-26T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.168828 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.168898 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.168908 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.168925 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.168935 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:00Z","lastTransitionTime":"2025-11-26T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.272661 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.272711 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.272728 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.272752 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.272770 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:00Z","lastTransitionTime":"2025-11-26T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.377184 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.377245 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.377269 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.377296 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.377319 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:00Z","lastTransitionTime":"2025-11-26T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.480174 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.480217 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.480225 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.480239 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.480249 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:00Z","lastTransitionTime":"2025-11-26T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.499829 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.499858 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.499828 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.499954 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:00 crc kubenswrapper[4909]: E1126 07:02:00.500077 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:00 crc kubenswrapper[4909]: E1126 07:02:00.500240 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:00 crc kubenswrapper[4909]: E1126 07:02:00.500272 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:00 crc kubenswrapper[4909]: E1126 07:02:00.500350 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.582135 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.582175 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.582184 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.582199 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.582208 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:00Z","lastTransitionTime":"2025-11-26T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.684927 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.685090 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.685106 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.685124 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.685135 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:00Z","lastTransitionTime":"2025-11-26T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.787840 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.787885 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.787893 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.787907 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.787918 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:00Z","lastTransitionTime":"2025-11-26T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.890484 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.890530 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.890545 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.890561 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.890572 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:00Z","lastTransitionTime":"2025-11-26T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.992909 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.992954 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.992967 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.992985 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:00 crc kubenswrapper[4909]: I1126 07:02:00.993003 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:00Z","lastTransitionTime":"2025-11-26T07:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.095763 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.095824 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.095841 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.095863 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.095883 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:01Z","lastTransitionTime":"2025-11-26T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.198891 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.198936 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.198946 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.198960 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.198970 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:01Z","lastTransitionTime":"2025-11-26T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.306354 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.306402 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.306413 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.306430 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.306443 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:01Z","lastTransitionTime":"2025-11-26T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.409042 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.409082 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.409092 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.409107 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.409118 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:01Z","lastTransitionTime":"2025-11-26T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.511620 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.511674 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.511690 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.511710 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.511726 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:01Z","lastTransitionTime":"2025-11-26T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.614790 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.614842 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.614854 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.614871 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.614884 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:01Z","lastTransitionTime":"2025-11-26T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.717700 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.717766 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.717784 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.717811 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.717829 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:01Z","lastTransitionTime":"2025-11-26T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.820729 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.820808 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.820848 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.820879 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.820902 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:01Z","lastTransitionTime":"2025-11-26T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.923931 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.924000 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.924022 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.924047 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:01 crc kubenswrapper[4909]: I1126 07:02:01.924065 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:01Z","lastTransitionTime":"2025-11-26T07:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.027070 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.027150 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.027173 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.027201 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.027225 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:02Z","lastTransitionTime":"2025-11-26T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.130074 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.130135 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.130170 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.130205 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.130231 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:02Z","lastTransitionTime":"2025-11-26T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.233573 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.233707 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.233727 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.233752 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.233790 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:02Z","lastTransitionTime":"2025-11-26T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.336818 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.336882 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.336899 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.336922 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.336938 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:02Z","lastTransitionTime":"2025-11-26T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.439957 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.440007 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.440028 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.440055 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.440076 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:02Z","lastTransitionTime":"2025-11-26T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.498627 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.498672 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.498684 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:02 crc kubenswrapper[4909]: E1126 07:02:02.498840 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.498885 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:02 crc kubenswrapper[4909]: E1126 07:02:02.498984 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:02 crc kubenswrapper[4909]: E1126 07:02:02.499143 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:02 crc kubenswrapper[4909]: E1126 07:02:02.499289 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.543428 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.543482 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.543493 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.543511 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.543527 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:02Z","lastTransitionTime":"2025-11-26T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.646662 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.646706 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.646719 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.646737 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.646749 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:02Z","lastTransitionTime":"2025-11-26T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.749691 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.749732 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.749741 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.749759 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.749769 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:02Z","lastTransitionTime":"2025-11-26T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.853068 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.853118 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.853133 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.853154 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.853169 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:02Z","lastTransitionTime":"2025-11-26T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.956819 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.956897 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.956908 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.956928 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:02 crc kubenswrapper[4909]: I1126 07:02:02.956940 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:02Z","lastTransitionTime":"2025-11-26T07:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.058681 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.058721 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.058729 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.058741 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.058750 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:03Z","lastTransitionTime":"2025-11-26T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.161324 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.161398 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.161422 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.161455 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.161480 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:03Z","lastTransitionTime":"2025-11-26T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.264754 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.264799 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.264815 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.264834 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.264851 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:03Z","lastTransitionTime":"2025-11-26T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.367733 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.367777 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.367792 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.367812 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.367824 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:03Z","lastTransitionTime":"2025-11-26T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.470607 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.470700 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.470717 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.470879 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.470915 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:03Z","lastTransitionTime":"2025-11-26T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.499028 4909 scope.go:117] "RemoveContainer" containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" Nov 26 07:02:03 crc kubenswrapper[4909]: E1126 07:02:03.499211 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.573269 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.573620 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.573670 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.573705 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.573721 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:03Z","lastTransitionTime":"2025-11-26T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.676552 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.676597 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.676629 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.676644 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.676655 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:03Z","lastTransitionTime":"2025-11-26T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.781565 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.781689 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.781718 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.781746 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.781769 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:03Z","lastTransitionTime":"2025-11-26T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.887639 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.887678 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.887696 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.887713 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.887722 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:03Z","lastTransitionTime":"2025-11-26T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.989495 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.989541 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.989563 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.989583 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:03 crc kubenswrapper[4909]: I1126 07:02:03.989628 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:03Z","lastTransitionTime":"2025-11-26T07:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.091490 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.091543 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.091556 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.091573 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.091586 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:04Z","lastTransitionTime":"2025-11-26T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.194203 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.194245 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.194255 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.194269 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.194280 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:04Z","lastTransitionTime":"2025-11-26T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.297353 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.297378 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.297386 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.297400 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.297407 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:04Z","lastTransitionTime":"2025-11-26T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.399866 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.399901 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.399909 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.399923 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.399934 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:04Z","lastTransitionTime":"2025-11-26T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.498335 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.498368 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.498374 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:04 crc kubenswrapper[4909]: E1126 07:02:04.498461 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.498503 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:04 crc kubenswrapper[4909]: E1126 07:02:04.498620 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:04 crc kubenswrapper[4909]: E1126 07:02:04.498732 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:04 crc kubenswrapper[4909]: E1126 07:02:04.498770 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.501939 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.501996 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.502009 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.502023 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.502057 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:04Z","lastTransitionTime":"2025-11-26T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.605881 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.605933 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.605951 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.605971 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.605987 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:04Z","lastTransitionTime":"2025-11-26T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.708236 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.708290 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.708311 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.708339 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.708361 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:04Z","lastTransitionTime":"2025-11-26T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.811367 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.811420 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.811437 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.811458 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.811475 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:04Z","lastTransitionTime":"2025-11-26T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.914991 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.915065 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.915089 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.915117 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:04 crc kubenswrapper[4909]: I1126 07:02:04.915140 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:04Z","lastTransitionTime":"2025-11-26T07:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.017734 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.017809 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.017829 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.017852 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.017870 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:05Z","lastTransitionTime":"2025-11-26T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.120842 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.120894 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.120911 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.120938 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.120958 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:05Z","lastTransitionTime":"2025-11-26T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.225319 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.225403 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.225431 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.225465 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.225492 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:05Z","lastTransitionTime":"2025-11-26T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.328734 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.328804 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.328828 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.328854 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.328877 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:05Z","lastTransitionTime":"2025-11-26T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.431430 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.431487 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.431505 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.431527 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.431543 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:05Z","lastTransitionTime":"2025-11-26T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.535238 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.535304 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.535320 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.535342 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.535357 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:05Z","lastTransitionTime":"2025-11-26T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.638853 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.638924 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.638947 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.638978 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.639000 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:05Z","lastTransitionTime":"2025-11-26T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.742101 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.742179 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.742202 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.742228 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.742249 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:05Z","lastTransitionTime":"2025-11-26T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.845427 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.845487 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.845498 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.845515 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.845527 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:05Z","lastTransitionTime":"2025-11-26T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.948760 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.948822 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.948840 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.948862 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:05 crc kubenswrapper[4909]: I1126 07:02:05.948878 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:05Z","lastTransitionTime":"2025-11-26T07:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.053117 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.053179 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.053196 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.053219 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.053236 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.156860 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.156944 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.156969 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.157007 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.157031 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.260503 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.260568 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.260586 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.260644 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.260662 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.363174 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.363253 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.363270 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.363294 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.363314 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.434807 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.434889 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.434925 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.434960 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.434984 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: E1126 07:02:06.460113 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.465685 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.465750 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.465775 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.465805 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.465827 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: E1126 07:02:06.490406 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.496336 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.496411 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.496435 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.496467 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.496492 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.498528 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.498568 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.498674 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.498771 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:06 crc kubenswrapper[4909]: E1126 07:02:06.498939 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:06 crc kubenswrapper[4909]: E1126 07:02:06.499234 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:06 crc kubenswrapper[4909]: E1126 07:02:06.499467 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:06 crc kubenswrapper[4909]: E1126 07:02:06.500719 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:06 crc kubenswrapper[4909]: E1126 07:02:06.518913 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.524206 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.524258 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.524281 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.524309 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.524332 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: E1126 07:02:06.546029 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.551029 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.551089 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.551106 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.551129 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.551147 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: E1126 07:02:06.573265 4909 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T07:02:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35ed46f4-00cf-47b9-9f48-1d94d36971ca\\\",\\\"systemUUID\\\":\\\"01e7c3c6-9197-4e45-b0ec-48cc1dbb6b0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:06Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:06 crc kubenswrapper[4909]: E1126 07:02:06.573590 4909 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.576032 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.576111 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.576146 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.576171 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.576188 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.679203 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.679279 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.679305 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.679333 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.679353 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.782452 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.782524 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.782547 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.782575 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.782603 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.885701 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.885766 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.885816 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.885843 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 07:02:06 crc kubenswrapper[4909]: I1126 07:02:06.885861 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:06Z","lastTransitionTime":"2025-11-26T07:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
-- the preceding five-record block repeats with only the timestamps advancing, roughly every 100 ms, from 07:02:06.988 through 07:02:08.431; 15 near-identical repetitions omitted --
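Every NotReady heartbeat above has the same root cause: kubelet finds no CNI configuration in /etc/kubernetes/cni/net.d/ and keeps reporting NetworkReady=false until the network plugin writes one. Below is a minimal, illustrative Go sketch of that directory check, assuming only the path taken from the log message (the file name cnicheck.go and the program itself are hypothetical diagnostics, not kubelet's own implementation):

// cnicheck.go - illustrative sketch: report whether the CNI conf dir
// that kubelet complains about actually contains any configuration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the kubelet log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	var confs []string
	for _, e := range entries {
		// libcni-style loaders look for .conf, .conflist and .json files.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI configuration file found; node will stay NotReady")
		return
	}
	fmt.Printf("found CNI config(s): %s\n", strings.Join(confs, ", "))
}

Once the network plugin (here, the multus/OVN stack still coming up) drops a config file into that directory, the runtime reports NetworkReady=true and the Ready condition flips on the next status sync.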
Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.498924 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.498961 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.499108 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.499158 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb"
Nov 26 07:02:08 crc kubenswrapper[4909]: E1126 07:02:08.499206 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 07:02:08 crc kubenswrapper[4909]: E1126 07:02:08.499328 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd"
Nov 26 07:02:08 crc kubenswrapper[4909]: E1126 07:02:08.499437 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:08 crc kubenswrapper[4909]: E1126 07:02:08.499533 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.513687 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36f0eedf-d76a-4104-920a-3b2e4c4fb25b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 07:00:42.139035 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 07:00:42.143727 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2423477017/tls.crt::/tmp/serving-cert-2423477017/tls.key\\\\\\\"\\\\nI1126 07:00:48.026975 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 07:00:48.029916 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 07:00:48.029936 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 07:00:48.029963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 07:00:48.029968 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1126 07:00:48.044026 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1126 07:00:48.044366 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1126 07:00:48.044397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044401 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1126 07:00:48.044406 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1126 07:00:48.044409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1126 07:00:48.044413 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1126 07:00:48.044416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1126 07:00:48.044558 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.525358 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04477de2a79326b202a29f42e0ee9cfc00fa2fcbfa87b0389963eff25a5213c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.534144 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.534183 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.534192 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.534207 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.534221 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:08Z","lastTransitionTime":"2025-11-26T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.535540 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pvgfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f936ab99-34dc-455d-af35-8eb813a57065\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ece91f9dd83d1293b7343d4bc993ce2165f2eb5137a83122a59f47e02724f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzv4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pvgfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.549677 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7869dc25-1c65-44bf-8a5a-6c1300c2d883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0f9272104b9d719e269d2caca7fb451bea064dc737d9387b60aa7fb8bb72bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c844a8a64e7c4b45b81b3e9b758024412d37b6ee4997299319ac07cba3a6f73e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4859c3146c691c34fd291f0d5a2954e4c4e141fb8f13b05ffe6d58c7c888c9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad94a7409de5b95acd0325171147d1c3d858341d0d3f3e2b9ecb455175e9f0b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5538c75923772dac34ceb4ab1bb0b28ddb1574502d12032a2f6b969b37001ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8402762cf977e0b801725334cdebd23af6b48856414147b4974063b4e3fc6c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d0bdd79d533cc4dd24a98da1793be777737826990c373f7e314c6c14067251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9k5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f4bjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.560771 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8llwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e91888f-077f-4be0-a258-568bde5c10bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mmf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8llwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.572144 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4cd93c3-d4ca-4805-8cf0-3e943f0225ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629fddb547bd9a924aee0ab78c4d89bda667ecc70c07ee91a9fef09eb902a3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8da52d78e27f841a13558b3820a5a27f805ce2b70982de78a97bfddf8046e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95d7d888f562d28eb6eefb5dec78c98b945fd69d7bf4a0cbd56ba8cd48b2f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.585490 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fe44782-0b65-45ec-b4fe-c714752c16e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698a6e39c834e6c9a1a357f19476559f563af9076acfe5e96aba18e4b839777e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5038b4ee485abe3ef80237252ebcb1950ce6a9659099fd4da53ac78c454ee9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.604192 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb25041e01247ed06d5e06552f52468002c56c81ebb6e5e8c0d836e38aa2d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6db17f58fbeb6b5f6278478b7a2d1080c90df81447a8caf688a4f7d84a1a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.623000 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.634423 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602939ce-1411-4a17-a42f-719afb7c6dd9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b4dcb04efbfff4cac5947e0dd5793df740e93d7eb057f760de0d3e89497e3d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9zv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4lffv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.635828 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.635871 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.635880 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.635895 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.635905 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:08Z","lastTransitionTime":"2025-11-26T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.652953 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbfa11b9-2582-454a-9a97-63d505eccc8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:50Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI1126 07:01:50.386920 6964 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1126 07:01:50.386920 6964 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 4.688243ms\\\\nI1126 07:01:50.386954 6964 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI1126 07:01:50.386954 6964 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1126 07:01:50.387016 6964 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:01:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s 
restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8scj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-78qth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.664788 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7da1c699e189fb740d3cdb9229cae12ba2fea9bdcdc8dbd8ae60e4bf61c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.676924 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.685711 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87c66ecb-cdba-4731-9be5-55df0eb28303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a9952fd0e66a72b31681049758a6d1185bc2fe51c5b96915a85fd25b7bb186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c44b78e71291672502bfd11c718971b22f893bd5012fed990fdfda22776e75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkpqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:01:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52cfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 
07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.697040 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1398287c-6706-43a7-b7cc-ade07a30ccaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ace390e9902b151f7d8830c5c53c1c470c6c8d5c58f0662156cf778c70adcea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afa55e40d5f4ad2752535d54327637d1f485e8b70e86e3669b1ed787020251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab05abc1f130cd2f7019cc63bc1d6ef50935aab4bcc4c35f26d754e346a6b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066fa006685ef35acce29e291b8509828d4d789546426e94398489b1f175d4e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T07:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.706581 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.721282 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6b4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d586ea3-b189-476f-9e44-4579388f3107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T07:01:41Z\\\",\\\"message\\\":\\\"2025-11-26T07:00:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db\\\\n2025-11-26T07:00:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b3ec9ed-c733-4119-9c1f-d888216160db to /host/opt/cni/bin/\\\\n2025-11-26T07:00:56Z [verbose] multus-daemon started\\\\n2025-11-26T07:00:56Z [verbose] Readiness Indicator file check\\\\n2025-11-26T07:01:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T07:00:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cv5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6b4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.729222 4909 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-snbtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c374d623-8f62-4336-a493-7a07dabe5fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T07:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e274142b6d5fc84aa2073c95f3a28e55c50cb741e4ae155ac7d9d7d19e9e862b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T07:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94r7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T07:00:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-snbtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T07:02:08Z is after 2025-08-24T17:21:41Z" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.737703 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.737740 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.737751 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.737768 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.737780 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:08Z","lastTransitionTime":"2025-11-26T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.840662 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.840709 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.840723 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.840740 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.840750 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:08Z","lastTransitionTime":"2025-11-26T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.944021 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.944082 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.944099 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.944123 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:08 crc kubenswrapper[4909]: I1126 07:02:08.944142 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:08Z","lastTransitionTime":"2025-11-26T07:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.047759 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.047816 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.047874 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.047986 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.048015 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:09Z","lastTransitionTime":"2025-11-26T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.151039 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.151117 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.151143 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.151176 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.151203 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:09Z","lastTransitionTime":"2025-11-26T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.254872 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.255010 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.255033 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.255102 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.255126 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:09Z","lastTransitionTime":"2025-11-26T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.358628 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.358673 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.358684 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.358700 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.358712 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:09Z","lastTransitionTime":"2025-11-26T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.461230 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.461272 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.461281 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.461296 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.461305 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:09Z","lastTransitionTime":"2025-11-26T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.565337 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.565454 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.565481 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.565510 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.565531 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:09Z","lastTransitionTime":"2025-11-26T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.668381 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.668486 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.668538 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.668567 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.668585 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:09Z","lastTransitionTime":"2025-11-26T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.770755 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.771067 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.771081 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.771097 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.771111 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:09Z","lastTransitionTime":"2025-11-26T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.874035 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.874115 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.874134 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.874158 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.874176 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:09Z","lastTransitionTime":"2025-11-26T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.977337 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.977432 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.977447 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.977472 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:09 crc kubenswrapper[4909]: I1126 07:02:09.977489 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:09Z","lastTransitionTime":"2025-11-26T07:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.080045 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.080084 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.080096 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.080113 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.080124 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:10Z","lastTransitionTime":"2025-11-26T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.183417 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.183483 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.183506 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.183535 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.183557 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:10Z","lastTransitionTime":"2025-11-26T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.286049 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.286100 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.286116 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.286137 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.286154 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:10Z","lastTransitionTime":"2025-11-26T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.389529 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.389652 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.389700 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.389760 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.389782 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:10Z","lastTransitionTime":"2025-11-26T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.496733 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.496807 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.496832 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.496865 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.496889 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:10Z","lastTransitionTime":"2025-11-26T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.498852 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.498884 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:10 crc kubenswrapper[4909]: E1126 07:02:10.499036 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.499587 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:10 crc kubenswrapper[4909]: E1126 07:02:10.499805 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.500061 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:10 crc kubenswrapper[4909]: E1126 07:02:10.500186 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:10 crc kubenswrapper[4909]: E1126 07:02:10.500471 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.601168 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.601276 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.601294 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.601318 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.601340 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:10Z","lastTransitionTime":"2025-11-26T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.703854 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.703917 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.703942 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.703982 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.704003 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:10Z","lastTransitionTime":"2025-11-26T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.807529 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.807684 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.807718 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.807751 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.807775 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:10Z","lastTransitionTime":"2025-11-26T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.911183 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.911294 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.911317 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.911344 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:10 crc kubenswrapper[4909]: I1126 07:02:10.911362 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:10Z","lastTransitionTime":"2025-11-26T07:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.015353 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.015466 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.015489 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.015520 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.015543 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:11Z","lastTransitionTime":"2025-11-26T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.118996 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.119108 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.119139 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.119172 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.119196 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:11Z","lastTransitionTime":"2025-11-26T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.225713 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.225755 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.225764 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.225780 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.225789 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:11Z","lastTransitionTime":"2025-11-26T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.329492 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.329560 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.329586 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.329646 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.329663 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:11Z","lastTransitionTime":"2025-11-26T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.433006 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.433072 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.433090 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.433116 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.433133 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:11Z","lastTransitionTime":"2025-11-26T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.537730 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.537807 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.537818 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.537833 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.537845 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:11Z","lastTransitionTime":"2025-11-26T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.640699 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.640739 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.640747 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.640760 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.640769 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:11Z","lastTransitionTime":"2025-11-26T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.743072 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.743145 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.743164 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.743189 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.743205 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:11Z","lastTransitionTime":"2025-11-26T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.845985 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.846017 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.846026 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.846041 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.846051 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:11Z","lastTransitionTime":"2025-11-26T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.881891 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:11 crc kubenswrapper[4909]: E1126 07:02:11.882115 4909 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:02:11 crc kubenswrapper[4909]: E1126 07:02:11.882282 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs podName:6e91888f-077f-4be0-a258-568bde5c10bd nodeName:}" failed. No retries permitted until 2025-11-26 07:03:15.882251418 +0000 UTC m=+168.028462614 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs") pod "network-metrics-daemon-8llwb" (UID: "6e91888f-077f-4be0-a258-568bde5c10bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.948810 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.948876 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.948893 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.948915 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:11 crc kubenswrapper[4909]: I1126 07:02:11.948931 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:11Z","lastTransitionTime":"2025-11-26T07:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.052525 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.052668 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.052696 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.052772 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.052798 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:12Z","lastTransitionTime":"2025-11-26T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.155491 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.155548 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.155565 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.155586 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.155640 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:12Z","lastTransitionTime":"2025-11-26T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.258515 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.258665 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.258687 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.258711 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.258728 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:12Z","lastTransitionTime":"2025-11-26T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.361975 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.362047 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.362065 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.362089 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.362106 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:12Z","lastTransitionTime":"2025-11-26T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.464693 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.464761 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.464790 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.464820 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.464841 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:12Z","lastTransitionTime":"2025-11-26T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.497949 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.498074 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.498113 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:12 crc kubenswrapper[4909]: E1126 07:02:12.498275 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.498403 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:12 crc kubenswrapper[4909]: E1126 07:02:12.498564 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:12 crc kubenswrapper[4909]: E1126 07:02:12.499229 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:12 crc kubenswrapper[4909]: E1126 07:02:12.499356 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.568045 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.568125 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.568144 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.568168 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.568194 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:12Z","lastTransitionTime":"2025-11-26T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.671005 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.671045 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.671057 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.671073 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.671085 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:12Z","lastTransitionTime":"2025-11-26T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.774818 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.774885 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.774907 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.774937 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.774960 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:12Z","lastTransitionTime":"2025-11-26T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.878299 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.878354 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.878370 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.878393 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.878411 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:12Z","lastTransitionTime":"2025-11-26T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.981161 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.981265 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.981287 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.981317 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:12 crc kubenswrapper[4909]: I1126 07:02:12.981335 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:12Z","lastTransitionTime":"2025-11-26T07:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.084730 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.084789 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.084806 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.084829 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.084845 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:13Z","lastTransitionTime":"2025-11-26T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.188297 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.188357 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.188373 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.188395 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.188416 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:13Z","lastTransitionTime":"2025-11-26T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.291731 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.291800 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.291819 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.291840 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.291856 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:13Z","lastTransitionTime":"2025-11-26T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.394403 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.394466 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.394483 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.394505 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.394522 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:13Z","lastTransitionTime":"2025-11-26T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.497004 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.497050 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.497067 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.497102 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.497118 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:13Z","lastTransitionTime":"2025-11-26T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.599963 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.600022 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.600038 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.600074 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.600091 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:13Z","lastTransitionTime":"2025-11-26T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.702733 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.702764 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.702773 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.702786 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.702794 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:13Z","lastTransitionTime":"2025-11-26T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.805967 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.806024 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.806043 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.806067 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.806104 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:13Z","lastTransitionTime":"2025-11-26T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.909018 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.909068 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.909080 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.909099 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:13 crc kubenswrapper[4909]: I1126 07:02:13.909115 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:13Z","lastTransitionTime":"2025-11-26T07:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.011754 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.011801 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.011811 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.011829 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.011841 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:14Z","lastTransitionTime":"2025-11-26T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.114999 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.115063 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.115086 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.115115 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.115136 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:14Z","lastTransitionTime":"2025-11-26T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.217477 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.217541 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.217559 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.217582 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.217629 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:14Z","lastTransitionTime":"2025-11-26T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.320001 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.320064 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.320081 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.320110 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.320150 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:14Z","lastTransitionTime":"2025-11-26T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.423745 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.423802 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.423822 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.423845 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.423864 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:14Z","lastTransitionTime":"2025-11-26T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.498284 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.498435 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:14 crc kubenswrapper[4909]: E1126 07:02:14.498493 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.498283 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.498581 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:14 crc kubenswrapper[4909]: E1126 07:02:14.498800 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:14 crc kubenswrapper[4909]: E1126 07:02:14.498928 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:14 crc kubenswrapper[4909]: E1126 07:02:14.499132 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.526992 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.527068 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.527105 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.527143 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.527166 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:14Z","lastTransitionTime":"2025-11-26T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.630074 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.630158 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.630182 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.630212 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.630234 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:14Z","lastTransitionTime":"2025-11-26T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.734028 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.734075 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.734092 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.734118 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.734136 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:14Z","lastTransitionTime":"2025-11-26T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.837479 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.837623 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.837645 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.837672 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.837692 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:14Z","lastTransitionTime":"2025-11-26T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.941015 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.941093 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.941116 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.941144 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:14 crc kubenswrapper[4909]: I1126 07:02:14.941168 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:14Z","lastTransitionTime":"2025-11-26T07:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.044231 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.044298 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.044315 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.044339 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.044356 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:15Z","lastTransitionTime":"2025-11-26T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.146876 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.146959 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.146983 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.147015 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.147035 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:15Z","lastTransitionTime":"2025-11-26T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.249216 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.249281 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.249304 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.249330 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.249349 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:15Z","lastTransitionTime":"2025-11-26T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.353007 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.353074 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.353092 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.353117 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.353135 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:15Z","lastTransitionTime":"2025-11-26T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.456102 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.456192 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.456210 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.456271 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.456289 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:15Z","lastTransitionTime":"2025-11-26T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.499930 4909 scope.go:117] "RemoveContainer" containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" Nov 26 07:02:15 crc kubenswrapper[4909]: E1126 07:02:15.500248 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-78qth_openshift-ovn-kubernetes(bbfa11b9-2582-454a-9a97-63d505eccc8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.520953 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.559253 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.559309 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.559327 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.559352 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.559370 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:15Z","lastTransitionTime":"2025-11-26T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.662130 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.662241 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.662265 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.662295 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.662318 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:15Z","lastTransitionTime":"2025-11-26T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.765475 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.765556 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.765580 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.765661 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.765687 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:15Z","lastTransitionTime":"2025-11-26T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.868954 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.869019 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.869038 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.869062 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.869080 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:15Z","lastTransitionTime":"2025-11-26T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.971933 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.972004 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.972029 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.972061 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:15 crc kubenswrapper[4909]: I1126 07:02:15.972085 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:15Z","lastTransitionTime":"2025-11-26T07:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.076484 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.076541 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.076566 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.076636 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.076661 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:16Z","lastTransitionTime":"2025-11-26T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.179758 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.179821 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.179841 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.179864 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.179878 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:16Z","lastTransitionTime":"2025-11-26T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.283092 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.283304 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.283563 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.283855 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.284156 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:16Z","lastTransitionTime":"2025-11-26T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.387705 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.387775 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.387796 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.387824 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.387844 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:16Z","lastTransitionTime":"2025-11-26T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.491017 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.491308 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.491508 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.491721 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.492116 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:16Z","lastTransitionTime":"2025-11-26T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.500463 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.500583 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.500583 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:16 crc kubenswrapper[4909]: E1126 07:02:16.500794 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.500883 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:16 crc kubenswrapper[4909]: E1126 07:02:16.500981 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:16 crc kubenswrapper[4909]: E1126 07:02:16.501361 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:16 crc kubenswrapper[4909]: E1126 07:02:16.501584 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.595225 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.595278 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.595301 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.595330 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.595352 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:16Z","lastTransitionTime":"2025-11-26T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.697837 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.697897 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.697917 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.697943 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.697960 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:16Z","lastTransitionTime":"2025-11-26T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.801241 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.801292 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.801302 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.801317 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.801347 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:16Z","lastTransitionTime":"2025-11-26T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.904506 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.904687 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.904723 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.904752 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.904774 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:16Z","lastTransitionTime":"2025-11-26T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.952417 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.952481 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.952499 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.952523 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 07:02:16 crc kubenswrapper[4909]: I1126 07:02:16.952542 4909 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T07:02:16Z","lastTransitionTime":"2025-11-26T07:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.015425 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv"] Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.015791 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.017811 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.018146 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.018523 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.019797 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.054706 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.05468853 podStartE2EDuration="2.05468853s" podCreationTimestamp="2025-11-26 07:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:17.05436096 +0000 UTC m=+109.200572156" watchObservedRunningTime="2025-11-26 07:02:17.05468853 +0000 UTC m=+109.200899686" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.108443 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6b4ts" podStartSLOduration=84.108407826 podStartE2EDuration="1m24.108407826s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:17.093127465 +0000 UTC m=+109.239338671" watchObservedRunningTime="2025-11-26 07:02:17.108407826 +0000 UTC m=+109.254619032" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.127484 4909 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.127452534 podStartE2EDuration="59.127452534s" podCreationTimestamp="2025-11-26 07:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:17.126731225 +0000 UTC m=+109.272942421" watchObservedRunningTime="2025-11-26 07:02:17.127452534 +0000 UTC m=+109.273663740" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.127834 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-snbtv" podStartSLOduration=84.127817014 podStartE2EDuration="1m24.127817014s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:17.110254153 +0000 UTC m=+109.256465339" watchObservedRunningTime="2025-11-26 07:02:17.127817014 +0000 UTC m=+109.274028220" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.147135 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19564e09-86e0-4814-8d0a-a6a32648334d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.147857 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/19564e09-86e0-4814-8d0a-a6a32648334d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.148091 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19564e09-86e0-4814-8d0a-a6a32648334d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.148310 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/19564e09-86e0-4814-8d0a-a6a32648334d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.148492 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19564e09-86e0-4814-8d0a-a6a32648334d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.164324 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-pvgfr" podStartSLOduration=84.164294519 
podStartE2EDuration="1m24.164294519s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:17.163799016 +0000 UTC m=+109.310010222" watchObservedRunningTime="2025-11-26 07:02:17.164294519 +0000 UTC m=+109.310505705" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.203684 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.203651249 podStartE2EDuration="1m29.203651249s" podCreationTimestamp="2025-11-26 07:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:17.202290713 +0000 UTC m=+109.348501889" watchObservedRunningTime="2025-11-26 07:02:17.203651249 +0000 UTC m=+109.349862415" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.204002 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-f4bjn" podStartSLOduration=84.203997548 podStartE2EDuration="1m24.203997548s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:17.182566707 +0000 UTC m=+109.328777873" watchObservedRunningTime="2025-11-26 07:02:17.203997548 +0000 UTC m=+109.350208714" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.224766 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=43.224692409 podStartE2EDuration="43.224692409s" podCreationTimestamp="2025-11-26 07:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:17.223685593 +0000 UTC m=+109.369896789" watchObservedRunningTime="2025-11-26 07:02:17.224692409 +0000 UTC m=+109.370903615" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.250128 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19564e09-86e0-4814-8d0a-a6a32648334d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.250184 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/19564e09-86e0-4814-8d0a-a6a32648334d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.250207 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19564e09-86e0-4814-8d0a-a6a32648334d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.250245 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/19564e09-86e0-4814-8d0a-a6a32648334d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.250267 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19564e09-86e0-4814-8d0a-a6a32648334d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.250316 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/19564e09-86e0-4814-8d0a-a6a32648334d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.250720 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/19564e09-86e0-4814-8d0a-a6a32648334d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.251198 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19564e09-86e0-4814-8d0a-a6a32648334d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.257116 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19564e09-86e0-4814-8d0a-a6a32648334d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.275802 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19564e09-86e0-4814-8d0a-a6a32648334d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wn9sv\" (UID: \"19564e09-86e0-4814-8d0a-a6a32648334d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.321563 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podStartSLOduration=84.321544565 podStartE2EDuration="1m24.321544565s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:17.270205801 +0000 UTC m=+109.416416967" watchObservedRunningTime="2025-11-26 07:02:17.321544565 +0000 UTC m=+109.467755731" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.339872 4909 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.358874 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.358849881 podStartE2EDuration="1m29.358849881s" podCreationTimestamp="2025-11-26 07:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:17.357774253 +0000 UTC m=+109.503985419" watchObservedRunningTime="2025-11-26 07:02:17.358849881 +0000 UTC m=+109.505061047" Nov 26 07:02:17 crc kubenswrapper[4909]: I1126 07:02:17.389709 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52cfb" podStartSLOduration=84.389689488 podStartE2EDuration="1m24.389689488s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:17.388465797 +0000 UTC m=+109.534676963" watchObservedRunningTime="2025-11-26 07:02:17.389689488 +0000 UTC m=+109.535900654" Nov 26 07:02:18 crc kubenswrapper[4909]: I1126 07:02:18.179821 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" event={"ID":"19564e09-86e0-4814-8d0a-a6a32648334d","Type":"ContainerStarted","Data":"29b4c5b9ddc1862dffa1f353e7a999d7307db18cb8b2d73726e5f2cb35dda784"} Nov 26 07:02:18 crc kubenswrapper[4909]: I1126 07:02:18.180957 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" event={"ID":"19564e09-86e0-4814-8d0a-a6a32648334d","Type":"ContainerStarted","Data":"1820c66b5abbe9520203771ea1fa6ba9ef223e1895470338d61940ae040b9390"} Nov 26 07:02:18 crc kubenswrapper[4909]: I1126 07:02:18.209793 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wn9sv" podStartSLOduration=85.209765097 podStartE2EDuration="1m25.209765097s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:18.209464109 +0000 UTC m=+110.355675345" watchObservedRunningTime="2025-11-26 07:02:18.209765097 +0000 UTC m=+110.355976293" Nov 26 07:02:18 crc kubenswrapper[4909]: I1126 07:02:18.498315 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:18 crc kubenswrapper[4909]: I1126 07:02:18.498309 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:18 crc kubenswrapper[4909]: I1126 07:02:18.498402 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:18 crc kubenswrapper[4909]: E1126 07:02:18.500208 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:18 crc kubenswrapper[4909]: I1126 07:02:18.500261 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:18 crc kubenswrapper[4909]: E1126 07:02:18.500368 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:18 crc kubenswrapper[4909]: E1126 07:02:18.500494 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:18 crc kubenswrapper[4909]: E1126 07:02:18.500693 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:20 crc kubenswrapper[4909]: I1126 07:02:20.498717 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:20 crc kubenswrapper[4909]: I1126 07:02:20.498795 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:20 crc kubenswrapper[4909]: E1126 07:02:20.499039 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:20 crc kubenswrapper[4909]: E1126 07:02:20.499185 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:20 crc kubenswrapper[4909]: I1126 07:02:20.498890 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:20 crc kubenswrapper[4909]: E1126 07:02:20.499287 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:20 crc kubenswrapper[4909]: I1126 07:02:20.499801 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:20 crc kubenswrapper[4909]: E1126 07:02:20.499903 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:22 crc kubenswrapper[4909]: I1126 07:02:22.498775 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:22 crc kubenswrapper[4909]: I1126 07:02:22.498809 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:22 crc kubenswrapper[4909]: I1126 07:02:22.498965 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:22 crc kubenswrapper[4909]: I1126 07:02:22.499232 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:22 crc kubenswrapper[4909]: E1126 07:02:22.499228 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:22 crc kubenswrapper[4909]: E1126 07:02:22.499366 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:22 crc kubenswrapper[4909]: E1126 07:02:22.499526 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:22 crc kubenswrapper[4909]: E1126 07:02:22.500021 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:24 crc kubenswrapper[4909]: I1126 07:02:24.498037 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:24 crc kubenswrapper[4909]: I1126 07:02:24.498108 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:24 crc kubenswrapper[4909]: E1126 07:02:24.498202 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:24 crc kubenswrapper[4909]: I1126 07:02:24.498240 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:24 crc kubenswrapper[4909]: I1126 07:02:24.498032 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:24 crc kubenswrapper[4909]: E1126 07:02:24.498395 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:24 crc kubenswrapper[4909]: E1126 07:02:24.498524 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:24 crc kubenswrapper[4909]: E1126 07:02:24.498711 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:26 crc kubenswrapper[4909]: I1126 07:02:26.498298 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:26 crc kubenswrapper[4909]: I1126 07:02:26.498379 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:26 crc kubenswrapper[4909]: I1126 07:02:26.498513 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:26 crc kubenswrapper[4909]: E1126 07:02:26.498501 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:26 crc kubenswrapper[4909]: I1126 07:02:26.498580 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:26 crc kubenswrapper[4909]: E1126 07:02:26.498704 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:26 crc kubenswrapper[4909]: E1126 07:02:26.498848 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:26 crc kubenswrapper[4909]: E1126 07:02:26.498940 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:28 crc kubenswrapper[4909]: I1126 07:02:28.219327 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b4ts_3d586ea3-b189-476f-9e44-4579388f3107/kube-multus/1.log" Nov 26 07:02:28 crc kubenswrapper[4909]: I1126 07:02:28.220085 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b4ts_3d586ea3-b189-476f-9e44-4579388f3107/kube-multus/0.log" Nov 26 07:02:28 crc kubenswrapper[4909]: I1126 07:02:28.220147 4909 generic.go:334] "Generic (PLEG): container finished" podID="3d586ea3-b189-476f-9e44-4579388f3107" containerID="e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be" exitCode=1 Nov 26 07:02:28 crc kubenswrapper[4909]: I1126 07:02:28.220193 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b4ts" event={"ID":"3d586ea3-b189-476f-9e44-4579388f3107","Type":"ContainerDied","Data":"e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be"} Nov 26 07:02:28 crc kubenswrapper[4909]: I1126 07:02:28.220249 4909 scope.go:117] "RemoveContainer" containerID="a4ba07c85ca2d47d17f252605870dee4314255a5103f669720f2bb4c985a9419" Nov 26 07:02:28 crc kubenswrapper[4909]: I1126 07:02:28.222493 4909 scope.go:117] "RemoveContainer" containerID="e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be" Nov 26 07:02:28 crc kubenswrapper[4909]: E1126 07:02:28.223052 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-6b4ts_openshift-multus(3d586ea3-b189-476f-9e44-4579388f3107)\"" pod="openshift-multus/multus-6b4ts" podUID="3d586ea3-b189-476f-9e44-4579388f3107" Nov 26 07:02:28 crc kubenswrapper[4909]: E1126 07:02:28.455994 4909 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 26 07:02:28 crc kubenswrapper[4909]: I1126 07:02:28.498745 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:28 crc kubenswrapper[4909]: E1126 07:02:28.501230 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:28 crc kubenswrapper[4909]: I1126 07:02:28.501553 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:28 crc kubenswrapper[4909]: I1126 07:02:28.501693 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:28 crc kubenswrapper[4909]: I1126 07:02:28.501790 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:28 crc kubenswrapper[4909]: E1126 07:02:28.501878 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:28 crc kubenswrapper[4909]: E1126 07:02:28.502025 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:28 crc kubenswrapper[4909]: E1126 07:02:28.502479 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:28 crc kubenswrapper[4909]: E1126 07:02:28.620258 4909 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 26 07:02:29 crc kubenswrapper[4909]: I1126 07:02:29.228187 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b4ts_3d586ea3-b189-476f-9e44-4579388f3107/kube-multus/1.log" Nov 26 07:02:30 crc kubenswrapper[4909]: I1126 07:02:30.498462 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:30 crc kubenswrapper[4909]: I1126 07:02:30.498683 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:30 crc kubenswrapper[4909]: I1126 07:02:30.498736 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:30 crc kubenswrapper[4909]: I1126 07:02:30.498777 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:30 crc kubenswrapper[4909]: E1126 07:02:30.499468 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:30 crc kubenswrapper[4909]: E1126 07:02:30.499844 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:30 crc kubenswrapper[4909]: E1126 07:02:30.500030 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:30 crc kubenswrapper[4909]: I1126 07:02:30.500037 4909 scope.go:117] "RemoveContainer" containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" Nov 26 07:02:30 crc kubenswrapper[4909]: E1126 07:02:30.500071 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:31 crc kubenswrapper[4909]: I1126 07:02:31.236878 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/3.log" Nov 26 07:02:31 crc kubenswrapper[4909]: I1126 07:02:31.239081 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerStarted","Data":"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4"} Nov 26 07:02:31 crc kubenswrapper[4909]: I1126 07:02:31.239475 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:02:31 crc kubenswrapper[4909]: I1126 07:02:31.326335 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podStartSLOduration=98.326312749 podStartE2EDuration="1m38.326312749s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:31.26635355 +0000 UTC m=+123.412564716" watchObservedRunningTime="2025-11-26 07:02:31.326312749 +0000 UTC m=+123.472523915" Nov 26 07:02:31 crc kubenswrapper[4909]: I1126 07:02:31.326739 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8llwb"] Nov 26 07:02:31 crc kubenswrapper[4909]: I1126 07:02:31.326822 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:31 crc kubenswrapper[4909]: E1126 07:02:31.326913 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:32 crc kubenswrapper[4909]: I1126 07:02:32.498376 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:32 crc kubenswrapper[4909]: I1126 07:02:32.498473 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:32 crc kubenswrapper[4909]: E1126 07:02:32.498862 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:32 crc kubenswrapper[4909]: I1126 07:02:32.498539 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:32 crc kubenswrapper[4909]: E1126 07:02:32.498939 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:32 crc kubenswrapper[4909]: E1126 07:02:32.499137 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:33 crc kubenswrapper[4909]: I1126 07:02:33.498357 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:33 crc kubenswrapper[4909]: E1126 07:02:33.498558 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:33 crc kubenswrapper[4909]: E1126 07:02:33.620986 4909 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Nov 26 07:02:34 crc kubenswrapper[4909]: I1126 07:02:34.497937 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:34 crc kubenswrapper[4909]: I1126 07:02:34.497961 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:34 crc kubenswrapper[4909]: E1126 07:02:34.498093 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:34 crc kubenswrapper[4909]: E1126 07:02:34.498175 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:34 crc kubenswrapper[4909]: I1126 07:02:34.497963 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:34 crc kubenswrapper[4909]: E1126 07:02:34.498263 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:35 crc kubenswrapper[4909]: I1126 07:02:35.498534 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:35 crc kubenswrapper[4909]: E1126 07:02:35.498759 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:36 crc kubenswrapper[4909]: I1126 07:02:36.331229 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:02:36 crc kubenswrapper[4909]: I1126 07:02:36.498121 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:36 crc kubenswrapper[4909]: E1126 07:02:36.498255 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:36 crc kubenswrapper[4909]: I1126 07:02:36.498328 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:36 crc kubenswrapper[4909]: I1126 07:02:36.498125 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:36 crc kubenswrapper[4909]: E1126 07:02:36.498429 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:36 crc kubenswrapper[4909]: E1126 07:02:36.498490 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:37 crc kubenswrapper[4909]: I1126 07:02:37.498940 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:37 crc kubenswrapper[4909]: E1126 07:02:37.499195 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:38 crc kubenswrapper[4909]: I1126 07:02:38.498503 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:38 crc kubenswrapper[4909]: I1126 07:02:38.498564 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:38 crc kubenswrapper[4909]: E1126 07:02:38.500506 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:38 crc kubenswrapper[4909]: I1126 07:02:38.500523 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:38 crc kubenswrapper[4909]: E1126 07:02:38.500641 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:38 crc kubenswrapper[4909]: E1126 07:02:38.500719 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:38 crc kubenswrapper[4909]: E1126 07:02:38.622283 4909 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 26 07:02:39 crc kubenswrapper[4909]: I1126 07:02:39.497929 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:39 crc kubenswrapper[4909]: E1126 07:02:39.498242 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:40 crc kubenswrapper[4909]: I1126 07:02:40.498204 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:40 crc kubenswrapper[4909]: I1126 07:02:40.498241 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:40 crc kubenswrapper[4909]: I1126 07:02:40.498359 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:40 crc kubenswrapper[4909]: E1126 07:02:40.498533 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:40 crc kubenswrapper[4909]: E1126 07:02:40.498936 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:40 crc kubenswrapper[4909]: E1126 07:02:40.499117 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:40 crc kubenswrapper[4909]: I1126 07:02:40.499216 4909 scope.go:117] "RemoveContainer" containerID="e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be" Nov 26 07:02:41 crc kubenswrapper[4909]: I1126 07:02:41.279501 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b4ts_3d586ea3-b189-476f-9e44-4579388f3107/kube-multus/1.log" Nov 26 07:02:41 crc kubenswrapper[4909]: I1126 07:02:41.279984 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b4ts" event={"ID":"3d586ea3-b189-476f-9e44-4579388f3107","Type":"ContainerStarted","Data":"0adb77440dd3fcd99f6d9a0e77ab2d7cb635055d8ae27a82d06d45441a542384"} Nov 26 07:02:41 crc kubenswrapper[4909]: I1126 07:02:41.498809 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:41 crc kubenswrapper[4909]: E1126 07:02:41.499021 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:42 crc kubenswrapper[4909]: I1126 07:02:42.498536 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:42 crc kubenswrapper[4909]: I1126 07:02:42.498864 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:42 crc kubenswrapper[4909]: E1126 07:02:42.499144 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 07:02:42 crc kubenswrapper[4909]: I1126 07:02:42.499190 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:42 crc kubenswrapper[4909]: E1126 07:02:42.499303 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 07:02:42 crc kubenswrapper[4909]: E1126 07:02:42.499551 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 07:02:43 crc kubenswrapper[4909]: I1126 07:02:43.497900 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:43 crc kubenswrapper[4909]: E1126 07:02:43.498087 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8llwb" podUID="6e91888f-077f-4be0-a258-568bde5c10bd" Nov 26 07:02:44 crc kubenswrapper[4909]: I1126 07:02:44.498852 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:44 crc kubenswrapper[4909]: I1126 07:02:44.498948 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:44 crc kubenswrapper[4909]: I1126 07:02:44.498852 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:44 crc kubenswrapper[4909]: I1126 07:02:44.501643 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 26 07:02:44 crc kubenswrapper[4909]: I1126 07:02:44.501749 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 26 07:02:44 crc kubenswrapper[4909]: I1126 07:02:44.501834 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 26 07:02:44 crc kubenswrapper[4909]: I1126 07:02:44.501844 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 26 07:02:45 crc kubenswrapper[4909]: I1126 07:02:45.498467 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:02:45 crc kubenswrapper[4909]: I1126 07:02:45.500775 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 26 07:02:45 crc kubenswrapper[4909]: I1126 07:02:45.500809 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.743817 4909 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.825174 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-68dmw"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.825964 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.826142 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.826816 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.829230 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-r4q2l"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.829886 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bwg2r"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.830227 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.830775 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.831294 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.831903 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.832954 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.833350 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-f7bmk"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.833548 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2qlc6"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.833985 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.834688 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.835917 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.836124 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7mqds"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.836645 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.837070 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.837282 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.849843 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4cmcz"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.850367 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4cmcz" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.854942 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.855517 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.864826 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.865717 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.866041 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.866082 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.866114 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.866374 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.866524 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.866914 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.866917 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.872701 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.881234 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.900719 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.901663 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.901913 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.901969 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.902155 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.902203 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.902249 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.902210 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.902339 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.902157 4909 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.902431 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.902535 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.902541 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.902652 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.902981 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.903123 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.903939 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.904794 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.905092 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.905273 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.905445 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.905737 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.905760 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.905921 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.906015 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.906204 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.906208 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.906357 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.906487 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 26 07:02:47 crc 
kubenswrapper[4909]: I1126 07:02:47.906664 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.906828 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.906869 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.906955 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.907008 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.907146 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.907295 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.907487 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.907296 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.908805 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.909041 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.909283 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.911257 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.911422 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.912019 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.912096 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.912290 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.912560 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.913142 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.913253 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.913335 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.913412 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.917822 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.914003 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.914026 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.914101 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.914246 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.914388 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.914482 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.914658 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.914826 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.914931 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.915092 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.915142 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 26 
07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.915181 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.915275 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.915415 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.915794 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p4d2g"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.918958 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29"] Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.915757 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl6b7\" (UniqueName: \"kubernetes.io/projected/bb0d84e2-45ac-4936-b267-d75214779f91-kube-api-access-xl6b7\") pod \"openshift-config-operator-7777fb866f-dn5tv\" (UID: \"bb0d84e2-45ac-4936-b267-d75214779f91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919555 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f87108b-bbab-4f72-a974-0cb8d188890d-serving-cert\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919617 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919652 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc63a259-02f9-4ab8-83ac-04baf7e15766-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mdwd4\" (UID: \"dc63a259-02f9-4ab8-83ac-04baf7e15766\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919676 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43fdcbf7-b990-48a0-8148-5fd13c5ba035-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919700 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919727 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/72f34a8b-b736-4737-8f2f-5471054c40f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919750 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43fdcbf7-b990-48a0-8148-5fd13c5ba035-serving-cert\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919775 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f21f776-e2f4-41e5-bdb9-6639817afa17-serving-cert\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919798 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919820 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43fdcbf7-b990-48a0-8148-5fd13c5ba035-service-ca-bundle\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919861 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qhqd\" (UniqueName: \"kubernetes.io/projected/9f87108b-bbab-4f72-a974-0cb8d188890d-kube-api-access-5qhqd\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919908 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43fdcbf7-b990-48a0-8148-5fd13c5ba035-config\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.919952 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgd62\" (UniqueName: 
\"kubernetes.io/projected/dc63a259-02f9-4ab8-83ac-04baf7e15766-kube-api-access-mgd62\") pod \"openshift-controller-manager-operator-756b6f6bc6-mdwd4\" (UID: \"dc63a259-02f9-4ab8-83ac-04baf7e15766\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920004 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5zr4\" (UniqueName: \"kubernetes.io/projected/72f34a8b-b736-4737-8f2f-5471054c40f2-kube-api-access-s5zr4\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920038 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz7dd\" (UniqueName: \"kubernetes.io/projected/43fdcbf7-b990-48a0-8148-5fd13c5ba035-kube-api-access-gz7dd\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920369 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920352 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-policies\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920778 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72f34a8b-b736-4737-8f2f-5471054c40f2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920802 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543475d8-ab28-4406-83cf-ca4c0aecd157-config\") pod \"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920830 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-serving-cert\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920850 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-image-import-ca\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " 
pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920875 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920896 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920918 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hkd5\" (UniqueName: \"kubernetes.io/projected/190a5902-e892-467f-8f9b-6d4b844cbc90-kube-api-access-2hkd5\") pod \"cluster-samples-operator-665b6dd947-cr2tp\" (UID: \"190a5902-e892-467f-8f9b-6d4b844cbc90\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920939 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72f34a8b-b736-4737-8f2f-5471054c40f2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920959 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/543475d8-ab28-4406-83cf-ca4c0aecd157-serving-cert\") pod \"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.920980 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/543475d8-ab28-4406-83cf-ca4c0aecd157-trusted-ca\") pod \"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921003 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-etcd-serving-ca\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921026 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnkf2\" (UniqueName: \"kubernetes.io/projected/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-kube-api-access-cnkf2\") pod \"console-f9d7485db-f7bmk\" (UID: 
\"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921046 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9f87108b-bbab-4f72-a974-0cb8d188890d-node-pullsecrets\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921105 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921141 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc63a259-02f9-4ab8-83ac-04baf7e15766-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mdwd4\" (UID: \"dc63a259-02f9-4ab8-83ac-04baf7e15766\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921199 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9f87108b-bbab-4f72-a974-0cb8d188890d-etcd-client\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921401 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921450 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-config\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921478 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-service-ca\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921512 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9f87108b-bbab-4f72-a974-0cb8d188890d-encryption-config\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 
07:02:47.921544 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/190a5902-e892-467f-8f9b-6d4b844cbc90-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cr2tp\" (UID: \"190a5902-e892-467f-8f9b-6d4b844cbc90\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921572 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-config\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921633 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-config\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921662 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-oauth-config\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921685 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-audit\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921712 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9f87108b-bbab-4f72-a974-0cb8d188890d-audit-dir\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921750 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-dir\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921760 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921781 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921821 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb0d84e2-45ac-4936-b267-d75214779f91-serving-cert\") pod \"openshift-config-operator-7777fb866f-dn5tv\" (UID: \"bb0d84e2-45ac-4936-b267-d75214779f91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921857 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-client-ca\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921883 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfgxr\" (UniqueName: \"kubernetes.io/projected/0f21f776-e2f4-41e5-bdb9-6639817afa17-kube-api-access-dfgxr\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921913 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921940 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-trusted-ca-bundle\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.921980 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.922017 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbvq6\" (UniqueName: \"kubernetes.io/projected/543475d8-ab28-4406-83cf-ca4c0aecd157-kube-api-access-rbvq6\") pod 
\"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.922060 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.922091 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.922120 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.922152 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-oauth-serving-cert\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.922181 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v5j8\" (UniqueName: \"kubernetes.io/projected/36375488-d0da-488c-b0ac-1e4f63490cbd-kube-api-access-6v5j8\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.922213 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bb0d84e2-45ac-4936-b267-d75214779f91-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dn5tv\" (UID: \"bb0d84e2-45ac-4936-b267-d75214779f91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.923808 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.924022 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.924151 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.924250 4909 reflector.go:368] Caches populated for 
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.925248 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.927693 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zpp77"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.928095 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.928375 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.928585 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.930173 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.931683 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zpp77"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.942621 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.943053 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.943623 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.943888 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.944194 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.945469 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.946031 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.946492 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.959534 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.959902 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.963302 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.963752 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.964930 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wqlmg"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.965523 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.966487 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.966659 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.966880 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.967018 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.967042 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.967181 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.967465 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.968329 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.968976 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.969421 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bm9vz"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.969964 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.971004 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.972338 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.974945 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-7zgjj"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.974981 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.975556 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.976338 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.976612 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7zgjj"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.976632 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.976614 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.977391 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.977759 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.978560 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.982498 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6sfv"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.983007 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.983959 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.984296 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.984890 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.985570 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.988003 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.988360 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.988712 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.988874 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.990130 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgf4b"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.990756 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.990970 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.994044 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.994714 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp"
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.997123 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-r4q2l"]
Nov 26 07:02:47 crc kubenswrapper[4909]: I1126 07:02:47.997786 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-68dmw"]
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.002182 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp"]
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.003427 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd"]
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.004566 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd"
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.006833 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb"]
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.007398 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb"
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.007838 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.010675 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg"]
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.012093 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-psqx2"]
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.012706 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg"
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.012903 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2"
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.013171 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb"]
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.013937 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb"
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.014068 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7"]
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.015523 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bwg2r"]
Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.017038 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.021962 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2qlc6"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.022851 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-config\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.022890 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-service-ca\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.022914 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9f87108b-bbab-4f72-a974-0cb8d188890d-etcd-client\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.022930 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.022952 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/190a5902-e892-467f-8f9b-6d4b844cbc90-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cr2tp\" (UID: \"190a5902-e892-467f-8f9b-6d4b844cbc90\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.022973 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9f87108b-bbab-4f72-a974-0cb8d188890d-encryption-config\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.022994 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-config\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023013 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-config\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023035 4909 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fsf9\" (UniqueName: \"kubernetes.io/projected/cd48299d-0c3f-4475-b4f5-a00d85b71393-kube-api-access-5fsf9\") pod \"downloads-7954f5f757-4cmcz\" (UID: \"cd48299d-0c3f-4475-b4f5-a00d85b71393\") " pod="openshift-console/downloads-7954f5f757-4cmcz" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023062 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-oauth-config\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023081 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-audit\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023101 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9f87108b-bbab-4f72-a974-0cb8d188890d-audit-dir\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023119 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-dir\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023140 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023163 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfck9\" (UniqueName: \"kubernetes.io/projected/c1f4c3b4-d536-424d-aecc-c1ea2228940f-kube-api-access-gfck9\") pod \"dns-operator-744455d44c-zpp77\" (UID: \"c1f4c3b4-d536-424d-aecc-c1ea2228940f\") " pod="openshift-dns-operator/dns-operator-744455d44c-zpp77" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023185 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e89c18c-706c-4894-a472-01f259f2c854-config\") pod \"machine-approver-56656f9798-nxb7l\" (UID: \"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023209 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2nwn\" (UniqueName: \"kubernetes.io/projected/480e0e98-6e8e-480e-bf79-fa4d6cba6582-kube-api-access-l2nwn\") pod 
\"openshift-apiserver-operator-796bbdcf4f-4xv5r\" (UID: \"480e0e98-6e8e-480e-bf79-fa4d6cba6582\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023229 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqxj2\" (UniqueName: \"kubernetes.io/projected/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-kube-api-access-pqxj2\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023251 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb0d84e2-45ac-4936-b267-d75214779f91-serving-cert\") pod \"openshift-config-operator-7777fb866f-dn5tv\" (UID: \"bb0d84e2-45ac-4936-b267-d75214779f91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023274 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-client-ca\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023292 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfgxr\" (UniqueName: \"kubernetes.io/projected/0f21f776-e2f4-41e5-bdb9-6639817afa17-kube-api-access-dfgxr\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023310 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1f4c3b4-d536-424d-aecc-c1ea2228940f-metrics-tls\") pod \"dns-operator-744455d44c-zpp77\" (UID: \"c1f4c3b4-d536-424d-aecc-c1ea2228940f\") " pod="openshift-dns-operator/dns-operator-744455d44c-zpp77" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023344 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023364 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-trusted-ca-bundle\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023380 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-images\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023402 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023422 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbvq6\" (UniqueName: \"kubernetes.io/projected/543475d8-ab28-4406-83cf-ca4c0aecd157-kube-api-access-rbvq6\") pod \"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023441 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023458 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9e89c18c-706c-4894-a472-01f259f2c854-auth-proxy-config\") pod \"machine-approver-56656f9798-nxb7l\" (UID: \"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023479 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/97e5a116-5615-4290-bee9-44f45f2433df-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023499 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/97e5a116-5615-4290-bee9-44f45f2433df-encryption-config\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023521 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4b01fa-600c-4784-877e-affbde07fb1d-config\") pod \"kube-apiserver-operator-766d6c64bb-rpddn\" (UID: \"7e4b01fa-600c-4784-877e-affbde07fb1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023540 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc 
kubenswrapper[4909]: I1126 07:02:48.023559 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023578 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fxdz\" (UniqueName: \"kubernetes.io/projected/97e5a116-5615-4290-bee9-44f45f2433df-kube-api-access-8fxdz\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023614 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-oauth-serving-cert\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023637 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bb0d84e2-45ac-4936-b267-d75214779f91-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dn5tv\" (UID: \"bb0d84e2-45ac-4936-b267-d75214779f91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023655 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v5j8\" (UniqueName: \"kubernetes.io/projected/36375488-d0da-488c-b0ac-1e4f63490cbd-kube-api-access-6v5j8\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023678 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023701 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl6b7\" (UniqueName: \"kubernetes.io/projected/bb0d84e2-45ac-4936-b267-d75214779f91-kube-api-access-xl6b7\") pod \"openshift-config-operator-7777fb866f-dn5tv\" (UID: \"bb0d84e2-45ac-4936-b267-d75214779f91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023731 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f87108b-bbab-4f72-a974-0cb8d188890d-serving-cert\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023754 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4b01fa-600c-4784-877e-affbde07fb1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rpddn\" (UID: \"7e4b01fa-600c-4784-877e-affbde07fb1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023801 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023829 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97e5a116-5615-4290-bee9-44f45f2433df-etcd-client\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023848 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-serving-cert\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023868 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4f9n\" (UniqueName: \"kubernetes.io/projected/9e89c18c-706c-4894-a472-01f259f2c854-kube-api-access-k4f9n\") pod \"machine-approver-56656f9798-nxb7l\" (UID: \"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023890 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43fdcbf7-b990-48a0-8148-5fd13c5ba035-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023909 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc63a259-02f9-4ab8-83ac-04baf7e15766-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mdwd4\" (UID: \"dc63a259-02f9-4ab8-83ac-04baf7e15766\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023927 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023951 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/72f34a8b-b736-4737-8f2f-5471054c40f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023972 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43fdcbf7-b990-48a0-8148-5fd13c5ba035-serving-cert\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.023992 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f21f776-e2f4-41e5-bdb9-6639817afa17-serving-cert\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024012 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024029 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43fdcbf7-b990-48a0-8148-5fd13c5ba035-service-ca-bundle\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024050 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qhqd\" (UniqueName: \"kubernetes.io/projected/9f87108b-bbab-4f72-a974-0cb8d188890d-kube-api-access-5qhqd\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024069 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e4b01fa-600c-4784-877e-affbde07fb1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rpddn\" (UID: \"7e4b01fa-600c-4784-877e-affbde07fb1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024099 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43fdcbf7-b990-48a0-8148-5fd13c5ba035-config\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024116 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-config\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024136 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgd62\" (UniqueName: \"kubernetes.io/projected/dc63a259-02f9-4ab8-83ac-04baf7e15766-kube-api-access-mgd62\") pod \"openshift-controller-manager-operator-756b6f6bc6-mdwd4\" (UID: \"dc63a259-02f9-4ab8-83ac-04baf7e15766\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024158 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-client-ca\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024180 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5zr4\" (UniqueName: \"kubernetes.io/projected/72f34a8b-b736-4737-8f2f-5471054c40f2-kube-api-access-s5zr4\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024199 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz7dd\" (UniqueName: \"kubernetes.io/projected/43fdcbf7-b990-48a0-8148-5fd13c5ba035-kube-api-access-gz7dd\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024215 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97e5a116-5615-4290-bee9-44f45f2433df-serving-cert\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024234 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk492\" (UniqueName: \"kubernetes.io/projected/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-kube-api-access-jk492\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024256 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/480e0e98-6e8e-480e-bf79-fa4d6cba6582-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4xv5r\" (UID: \"480e0e98-6e8e-480e-bf79-fa4d6cba6582\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024278 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-policies\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024295 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72f34a8b-b736-4737-8f2f-5471054c40f2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024316 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543475d8-ab28-4406-83cf-ca4c0aecd157-config\") pod \"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024335 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/480e0e98-6e8e-480e-bf79-fa4d6cba6582-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4xv5r\" (UID: \"480e0e98-6e8e-480e-bf79-fa4d6cba6582\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024354 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97e5a116-5615-4290-bee9-44f45f2433df-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024373 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-serving-cert\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024393 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-image-import-ca\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024412 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9e89c18c-706c-4894-a472-01f259f2c854-machine-approver-tls\") pod \"machine-approver-56656f9798-nxb7l\" (UID: \"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024434 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024431 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9f87108b-bbab-4f72-a974-0cb8d188890d-audit-dir\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024453 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024447 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bb0d84e2-45ac-4936-b267-d75214779f91-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dn5tv\" (UID: \"bb0d84e2-45ac-4936-b267-d75214779f91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024523 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97e5a116-5615-4290-bee9-44f45f2433df-audit-policies\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024719 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-config\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.024782 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-dir\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.026628 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-trusted-ca-bundle\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.026706 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:48 crc 
kubenswrapper[4909]: I1126 07:02:48.027155 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-service-ca\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.029100 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.030029 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72f34a8b-b736-4737-8f2f-5471054c40f2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.030074 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9f87108b-bbab-4f72-a974-0cb8d188890d-etcd-client\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.030653 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.030932 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.031225 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.032626 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/543475d8-ab28-4406-83cf-ca4c0aecd157-serving-cert\") pod \"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.032678 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/543475d8-ab28-4406-83cf-ca4c0aecd157-trusted-ca\") pod \"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.032713 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.033889 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-audit\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.034097 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-policies\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.035658 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9f87108b-bbab-4f72-a974-0cb8d188890d-encryption-config\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.035781 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f87108b-bbab-4f72-a974-0cb8d188890d-serving-cert\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.036096 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hkd5\" (UniqueName: \"kubernetes.io/projected/190a5902-e892-467f-8f9b-6d4b844cbc90-kube-api-access-2hkd5\") pod \"cluster-samples-operator-665b6dd947-cr2tp\" (UID: \"190a5902-e892-467f-8f9b-6d4b844cbc90\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.036227 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-config\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.036338 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnkf2\" (UniqueName: \"kubernetes.io/projected/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-kube-api-access-cnkf2\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.036485 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9f87108b-bbab-4f72-a974-0cb8d188890d-node-pullsecrets\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.036839 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.036896 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43fdcbf7-b990-48a0-8148-5fd13c5ba035-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.036982 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72f34a8b-b736-4737-8f2f-5471054c40f2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.037559 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43fdcbf7-b990-48a0-8148-5fd13c5ba035-service-ca-bundle\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.037810 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-image-import-ca\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.037914 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9f87108b-bbab-4f72-a974-0cb8d188890d-node-pullsecrets\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.037939 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-client-ca\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.036293 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.038082 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.038768 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43fdcbf7-b990-48a0-8148-5fd13c5ba035-serving-cert\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.038832 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.038914 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-etcd-serving-ca\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.038983 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc63a259-02f9-4ab8-83ac-04baf7e15766-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mdwd4\" (UID: \"dc63a259-02f9-4ab8-83ac-04baf7e15766\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.039056 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97e5a116-5615-4290-bee9-44f45f2433df-audit-dir\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.039075 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-config\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.040616 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.041235 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43fdcbf7-b990-48a0-8148-5fd13c5ba035-config\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.041642 4909 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.041939 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.041929 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.042032 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/543475d8-ab28-4406-83cf-ca4c0aecd157-trusted-ca\") pod \"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.042149 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-config\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.042389 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.042508 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-oauth-serving-cert\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.042690 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/72f34a8b-b736-4737-8f2f-5471054c40f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.033702 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f87108b-bbab-4f72-a974-0cb8d188890d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7mqds\" (UID: 
\"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.043088 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc63a259-02f9-4ab8-83ac-04baf7e15766-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mdwd4\" (UID: \"dc63a259-02f9-4ab8-83ac-04baf7e15766\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.043091 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f21f776-e2f4-41e5-bdb9-6639817afa17-serving-cert\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.043394 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/543475d8-ab28-4406-83cf-ca4c0aecd157-serving-cert\") pod \"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.043572 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb0d84e2-45ac-4936-b267-d75214779f91-serving-cert\") pod \"openshift-config-operator-7777fb866f-dn5tv\" (UID: \"bb0d84e2-45ac-4936-b267-d75214779f91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.043734 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/190a5902-e892-467f-8f9b-6d4b844cbc90-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cr2tp\" (UID: \"190a5902-e892-467f-8f9b-6d4b844cbc90\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.043871 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-oauth-config\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.043990 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fbn7q"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.044832 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.045791 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bm9vz"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.045889 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543475d8-ab28-4406-83cf-ca4c0aecd157-config\") pod \"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.046463 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.046805 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc63a259-02f9-4ab8-83ac-04baf7e15766-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mdwd4\" (UID: \"dc63a259-02f9-4ab8-83ac-04baf7e15766\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.046991 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-serving-cert\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.047888 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.049307 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4cmcz"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.051925 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-f7bmk"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.051964 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.053436 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.054572 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zpp77"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.055858 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-59zk5"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.056423 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-59zk5" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.057160 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.058635 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xt76w"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.059612 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.061157 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.062146 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7mqds"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.063787 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.065651 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.067452 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.069033 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.070761 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.072151 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.074040 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.074411 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.076014 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.076920 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p4d2g"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.078384 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.081942 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.084202 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.085419 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.086675 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xt76w"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.087897 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgf4b"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.089199 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.091604 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.093667 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wqlmg"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.095459 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6sfv"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.097013 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xhcl5"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.097599 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xhcl5" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.099910 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-r8xvl"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.101365 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.101687 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.103025 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fbn7q"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.104215 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-psqx2"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.105426 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xhcl5"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.106756 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r8xvl"] Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.107848 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.127981 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.146209 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97e5a116-5615-4290-bee9-44f45f2433df-audit-dir\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.146369 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fsf9\" (UniqueName: \"kubernetes.io/projected/cd48299d-0c3f-4475-b4f5-a00d85b71393-kube-api-access-5fsf9\") pod \"downloads-7954f5f757-4cmcz\" (UID: \"cd48299d-0c3f-4475-b4f5-a00d85b71393\") " pod="openshift-console/downloads-7954f5f757-4cmcz" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.146385 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97e5a116-5615-4290-bee9-44f45f2433df-audit-dir\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.146421 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfck9\" (UniqueName: \"kubernetes.io/projected/c1f4c3b4-d536-424d-aecc-c1ea2228940f-kube-api-access-gfck9\") pod \"dns-operator-744455d44c-zpp77\" (UID: \"c1f4c3b4-d536-424d-aecc-c1ea2228940f\") " pod="openshift-dns-operator/dns-operator-744455d44c-zpp77" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.146454 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e89c18c-706c-4894-a472-01f259f2c854-config\") pod \"machine-approver-56656f9798-nxb7l\" (UID: \"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.146483 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2nwn\" (UniqueName: 
\"kubernetes.io/projected/480e0e98-6e8e-480e-bf79-fa4d6cba6582-kube-api-access-l2nwn\") pod \"openshift-apiserver-operator-796bbdcf4f-4xv5r\" (UID: \"480e0e98-6e8e-480e-bf79-fa4d6cba6582\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.146519 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqxj2\" (UniqueName: \"kubernetes.io/projected/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-kube-api-access-pqxj2\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.146579 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1f4c3b4-d536-424d-aecc-c1ea2228940f-metrics-tls\") pod \"dns-operator-744455d44c-zpp77\" (UID: \"c1f4c3b4-d536-424d-aecc-c1ea2228940f\") " pod="openshift-dns-operator/dns-operator-744455d44c-zpp77" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.146659 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-images\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.146748 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9e89c18c-706c-4894-a472-01f259f2c854-auth-proxy-config\") pod \"machine-approver-56656f9798-nxb7l\" (UID: \"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.146778 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/97e5a116-5615-4290-bee9-44f45f2433df-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.147515 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/97e5a116-5615-4290-bee9-44f45f2433df-encryption-config\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148029 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9e89c18c-706c-4894-a472-01f259f2c854-auth-proxy-config\") pod \"machine-approver-56656f9798-nxb7l\" (UID: \"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148134 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-images\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148142 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4b01fa-600c-4784-877e-affbde07fb1d-config\") pod \"kube-apiserver-operator-766d6c64bb-rpddn\" (UID: \"7e4b01fa-600c-4784-877e-affbde07fb1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148219 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fxdz\" (UniqueName: \"kubernetes.io/projected/97e5a116-5615-4290-bee9-44f45f2433df-kube-api-access-8fxdz\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148277 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-plugins-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148289 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/97e5a116-5615-4290-bee9-44f45f2433df-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148304 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfxzn\" (UniqueName: \"kubernetes.io/projected/4047b0cd-e695-4486-a4d9-705b6e9863f2-kube-api-access-tfxzn\") pod \"service-ca-operator-777779d784-psqx2\" (UID: \"4047b0cd-e695-4486-a4d9-705b6e9863f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148402 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-images\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148436 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4047b0cd-e695-4486-a4d9-705b6e9863f2-serving-cert\") pod \"service-ca-operator-777779d784-psqx2\" (UID: \"4047b0cd-e695-4486-a4d9-705b6e9863f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148470 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-apiservice-cert\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148466 4909 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e89c18c-706c-4894-a472-01f259f2c854-config\") pod \"machine-approver-56656f9798-nxb7l\" (UID: \"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148515 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148674 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4b01fa-600c-4784-877e-affbde07fb1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rpddn\" (UID: \"7e4b01fa-600c-4784-877e-affbde07fb1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148693 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148721 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-mountpoint-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148778 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2bd06321-043f-48ec-a6d7-b19de03ffbf6-proxy-tls\") pod \"machine-config-controller-84d6567774-klzpc\" (UID: \"2bd06321-043f-48ec-a6d7-b19de03ffbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148820 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97e5a116-5615-4290-bee9-44f45f2433df-etcd-client\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148856 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-serving-cert\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148876 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4b01fa-600c-4784-877e-affbde07fb1d-config\") pod \"kube-apiserver-operator-766d6c64bb-rpddn\" (UID: \"7e4b01fa-600c-4784-877e-affbde07fb1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.148890 
4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4f9n\" (UniqueName: \"kubernetes.io/projected/9e89c18c-706c-4894-a472-01f259f2c854-kube-api-access-k4f9n\") pod \"machine-approver-56656f9798-nxb7l\" (UID: \"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.149041 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.149085 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8pst\" (UniqueName: \"kubernetes.io/projected/0422a643-8fdb-4a70-b120-182517c46a6c-kube-api-access-v8pst\") pod \"package-server-manager-789f6589d5-tsxsb\" (UID: \"0422a643-8fdb-4a70-b120-182517c46a6c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.149181 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-registration-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.150085 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1f4c3b4-d536-424d-aecc-c1ea2228940f-metrics-tls\") pod \"dns-operator-744455d44c-zpp77\" (UID: \"c1f4c3b4-d536-424d-aecc-c1ea2228940f\") " pod="openshift-dns-operator/dns-operator-744455d44c-zpp77" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.150404 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-csi-data-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.150461 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2bd06321-043f-48ec-a6d7-b19de03ffbf6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-klzpc\" (UID: \"2bd06321-043f-48ec-a6d7-b19de03ffbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.150522 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e4b01fa-600c-4784-877e-affbde07fb1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rpddn\" (UID: \"7e4b01fa-600c-4784-877e-affbde07fb1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.150551 4909 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-config\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.150573 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-socket-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.150648 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkpvz\" (UniqueName: \"kubernetes.io/projected/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-kube-api-access-wkpvz\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.151380 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-client-ca\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.151425 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb695\" (UniqueName: \"kubernetes.io/projected/710c8c8c-988b-4d7c-b91d-52724634b484-kube-api-access-cb695\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.151533 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97e5a116-5615-4290-bee9-44f45f2433df-serving-cert\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.151679 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk492\" (UniqueName: \"kubernetes.io/projected/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-kube-api-access-jk492\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.151786 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/480e0e98-6e8e-480e-bf79-fa4d6cba6582-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4xv5r\" (UID: \"480e0e98-6e8e-480e-bf79-fa4d6cba6582\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.151868 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-proxy-tls\") pod 
\"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.151948 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/480e0e98-6e8e-480e-bf79-fa4d6cba6582-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4xv5r\" (UID: \"480e0e98-6e8e-480e-bf79-fa4d6cba6582\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152019 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97e5a116-5615-4290-bee9-44f45f2433df-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152090 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-webhook-cert\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152196 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9e89c18c-706c-4894-a472-01f259f2c854-machine-approver-tls\") pod \"machine-approver-56656f9798-nxb7l\" (UID: \"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152279 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-client-ca\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152381 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2cd6\" (UniqueName: \"kubernetes.io/projected/2bd06321-043f-48ec-a6d7-b19de03ffbf6-kube-api-access-f2cd6\") pod \"machine-config-controller-84d6567774-klzpc\" (UID: \"2bd06321-043f-48ec-a6d7-b19de03ffbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152457 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97e5a116-5615-4290-bee9-44f45f2433df-audit-policies\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152099 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-serving-cert\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152586 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/710c8c8c-988b-4d7c-b91d-52724634b484-tmpfs\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152705 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-config\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152801 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4047b0cd-e695-4486-a4d9-705b6e9863f2-config\") pod \"service-ca-operator-777779d784-psqx2\" (UID: \"4047b0cd-e695-4486-a4d9-705b6e9863f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152876 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0422a643-8fdb-4a70-b120-182517c46a6c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tsxsb\" (UID: \"0422a643-8fdb-4a70-b120-182517c46a6c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152269 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-config\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152619 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.152834 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/480e0e98-6e8e-480e-bf79-fa4d6cba6582-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4xv5r\" (UID: \"480e0e98-6e8e-480e-bf79-fa4d6cba6582\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.153067 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvmpk\" (UniqueName: \"kubernetes.io/projected/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-kube-api-access-xvmpk\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.153411 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97e5a116-5615-4290-bee9-44f45f2433df-audit-policies\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.153318 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97e5a116-5615-4290-bee9-44f45f2433df-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.153423 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e4b01fa-600c-4784-877e-affbde07fb1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rpddn\" (UID: \"7e4b01fa-600c-4784-877e-affbde07fb1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.153840 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/97e5a116-5615-4290-bee9-44f45f2433df-encryption-config\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.154114 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-config\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.154527 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97e5a116-5615-4290-bee9-44f45f2433df-serving-cert\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.155153 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/480e0e98-6e8e-480e-bf79-fa4d6cba6582-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4xv5r\" (UID: \"480e0e98-6e8e-480e-bf79-fa4d6cba6582\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.155576 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97e5a116-5615-4290-bee9-44f45f2433df-etcd-client\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.155864 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/9e89c18c-706c-4894-a472-01f259f2c854-machine-approver-tls\") pod \"machine-approver-56656f9798-nxb7l\" (UID: \"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.169041 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.188206 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.208858 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.256453 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2cd6\" (UniqueName: \"kubernetes.io/projected/2bd06321-043f-48ec-a6d7-b19de03ffbf6-kube-api-access-f2cd6\") pod \"machine-config-controller-84d6567774-klzpc\" (UID: \"2bd06321-043f-48ec-a6d7-b19de03ffbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.256612 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.256611 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/710c8c8c-988b-4d7c-b91d-52724634b484-tmpfs\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.256774 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.256814 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4047b0cd-e695-4486-a4d9-705b6e9863f2-config\") pod \"service-ca-operator-777779d784-psqx2\" (UID: \"4047b0cd-e695-4486-a4d9-705b6e9863f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.256841 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0422a643-8fdb-4a70-b120-182517c46a6c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tsxsb\" (UID: \"0422a643-8fdb-4a70-b120-182517c46a6c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.256865 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvmpk\" (UniqueName: \"kubernetes.io/projected/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-kube-api-access-xvmpk\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257003 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-plugins-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257028 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfxzn\" (UniqueName: \"kubernetes.io/projected/4047b0cd-e695-4486-a4d9-705b6e9863f2-kube-api-access-tfxzn\") pod \"service-ca-operator-777779d784-psqx2\" (UID: \"4047b0cd-e695-4486-a4d9-705b6e9863f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257061 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-images\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257081 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4047b0cd-e695-4486-a4d9-705b6e9863f2-serving-cert\") pod \"service-ca-operator-777779d784-psqx2\" (UID: \"4047b0cd-e695-4486-a4d9-705b6e9863f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257102 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-apiservice-cert\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257136 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-mountpoint-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257167 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2bd06321-043f-48ec-a6d7-b19de03ffbf6-proxy-tls\") pod \"machine-config-controller-84d6567774-klzpc\" (UID: \"2bd06321-043f-48ec-a6d7-b19de03ffbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257198 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257218 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8pst\" (UniqueName: \"kubernetes.io/projected/0422a643-8fdb-4a70-b120-182517c46a6c-kube-api-access-v8pst\") pod \"package-server-manager-789f6589d5-tsxsb\" (UID: \"0422a643-8fdb-4a70-b120-182517c46a6c\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257236 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-registration-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257253 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-csi-data-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257273 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2bd06321-043f-48ec-a6d7-b19de03ffbf6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-klzpc\" (UID: \"2bd06321-043f-48ec-a6d7-b19de03ffbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257301 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-socket-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257325 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkpvz\" (UniqueName: \"kubernetes.io/projected/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-kube-api-access-wkpvz\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257374 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb695\" (UniqueName: \"kubernetes.io/projected/710c8c8c-988b-4d7c-b91d-52724634b484-kube-api-access-cb695\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257424 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-proxy-tls\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257436 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-mountpoint-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257450 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-webhook-cert\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257514 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-plugins-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257580 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-socket-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257846 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-registration-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.257918 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-csi-data-dir\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.258307 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2bd06321-043f-48ec-a6d7-b19de03ffbf6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-klzpc\" (UID: \"2bd06321-043f-48ec-a6d7-b19de03ffbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.258413 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/710c8c8c-988b-4d7c-b91d-52724634b484-tmpfs\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.262439 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.268969 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.287945 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.308673 4909 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.327498 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.348533 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.369471 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.388564 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.408789 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.429359 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.455659 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.468161 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.487812 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.508395 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.528776 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.550353 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.570143 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.589295 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.601478 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2bd06321-043f-48ec-a6d7-b19de03ffbf6-proxy-tls\") pod \"machine-config-controller-84d6567774-klzpc\" (UID: \"2bd06321-043f-48ec-a6d7-b19de03ffbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.608140 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.628808 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.648065 4909 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.667975 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.698337 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.707651 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.728240 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.748949 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.768856 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.789048 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.809351 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.829070 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.848896 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.868805 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.889091 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.908581 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.928345 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.948768 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.968584 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 26 07:02:48 crc kubenswrapper[4909]: I1126 07:02:48.988891 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 26 
07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.006852 4909 request.go:700] Waited for 1.015611995s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&limit=500&resourceVersion=0 Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.009152 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.028420 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.048767 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.069458 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.088520 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.108485 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.128450 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.148965 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.170178 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.187933 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.208578 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.228119 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.248239 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.257924 4909 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.257982 4909 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.258019 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-proxy-tls podName:deb4b7f7-1494-4289-8ac8-fbfcef6c76e0 nodeName:}" 
failed. No retries permitted until 2025-11-26 07:02:49.757989988 +0000 UTC m=+141.904201194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-proxy-tls") pod "machine-config-operator-74547568cd-p4gdd" (UID: "deb4b7f7-1494-4289-8ac8-fbfcef6c76e0") : failed to sync secret cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.258069 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-webhook-cert podName:710c8c8c-988b-4d7c-b91d-52724634b484 nodeName:}" failed. No retries permitted until 2025-11-26 07:02:49.758039009 +0000 UTC m=+141.904250215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-webhook-cert") pod "packageserver-d55dfcdfc-pgsgg" (UID: "710c8c8c-988b-4d7c-b91d-52724634b484") : failed to sync secret cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.258129 4909 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.258193 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-apiservice-cert podName:710c8c8c-988b-4d7c-b91d-52724634b484 nodeName:}" failed. No retries permitted until 2025-11-26 07:02:49.758173072 +0000 UTC m=+141.904384278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-apiservice-cert") pod "packageserver-d55dfcdfc-pgsgg" (UID: "710c8c8c-988b-4d7c-b91d-52724634b484") : failed to sync secret cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.258266 4909 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.258507 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4047b0cd-e695-4486-a4d9-705b6e9863f2-config podName:4047b0cd-e695-4486-a4d9-705b6e9863f2 nodeName:}" failed. No retries permitted until 2025-11-26 07:02:49.75848021 +0000 UTC m=+141.904691416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4047b0cd-e695-4486-a4d9-705b6e9863f2-config") pod "service-ca-operator-777779d784-psqx2" (UID: "4047b0cd-e695-4486-a4d9-705b6e9863f2") : failed to sync configmap cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.258649 4909 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.258710 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4047b0cd-e695-4486-a4d9-705b6e9863f2-serving-cert podName:4047b0cd-e695-4486-a4d9-705b6e9863f2 nodeName:}" failed. No retries permitted until 2025-11-26 07:02:49.758691096 +0000 UTC m=+141.904902372 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4047b0cd-e695-4486-a4d9-705b6e9863f2-serving-cert") pod "service-ca-operator-777779d784-psqx2" (UID: "4047b0cd-e695-4486-a4d9-705b6e9863f2") : failed to sync secret cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.258747 4909 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: E1126 07:02:49.258801 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0422a643-8fdb-4a70-b120-182517c46a6c-package-server-manager-serving-cert podName:0422a643-8fdb-4a70-b120-182517c46a6c nodeName:}" failed. No retries permitted until 2025-11-26 07:02:49.758784748 +0000 UTC m=+141.904995954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0422a643-8fdb-4a70-b120-182517c46a6c-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-tsxsb" (UID: "0422a643-8fdb-4a70-b120-182517c46a6c") : failed to sync secret cache: timed out waiting for the condition Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.259505 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-images\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.268968 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.288929 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.309542 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.328370 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.348685 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.369316 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.389882 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.409906 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.428673 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.448570 4909 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.469218 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.488414 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.541697 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbvq6\" (UniqueName: \"kubernetes.io/projected/543475d8-ab28-4406-83cf-ca4c0aecd157-kube-api-access-rbvq6\") pod \"console-operator-58897d9998-r4q2l\" (UID: \"543475d8-ab28-4406-83cf-ca4c0aecd157\") " pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.553621 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl6b7\" (UniqueName: \"kubernetes.io/projected/bb0d84e2-45ac-4936-b267-d75214779f91-kube-api-access-xl6b7\") pod \"openshift-config-operator-7777fb866f-dn5tv\" (UID: \"bb0d84e2-45ac-4936-b267-d75214779f91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.576742 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v5j8\" (UniqueName: \"kubernetes.io/projected/36375488-d0da-488c-b0ac-1e4f63490cbd-kube-api-access-6v5j8\") pod \"oauth-openshift-558db77b4-68dmw\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.586361 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz7dd\" (UniqueName: \"kubernetes.io/projected/43fdcbf7-b990-48a0-8148-5fd13c5ba035-kube-api-access-gz7dd\") pod \"authentication-operator-69f744f599-bwg2r\" (UID: \"43fdcbf7-b990-48a0-8148-5fd13c5ba035\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.600661 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgd62\" (UniqueName: \"kubernetes.io/projected/dc63a259-02f9-4ab8-83ac-04baf7e15766-kube-api-access-mgd62\") pod \"openshift-controller-manager-operator-756b6f6bc6-mdwd4\" (UID: \"dc63a259-02f9-4ab8-83ac-04baf7e15766\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.620486 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72f34a8b-b736-4737-8f2f-5471054c40f2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.641500 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5zr4\" (UniqueName: \"kubernetes.io/projected/72f34a8b-b736-4737-8f2f-5471054c40f2-kube-api-access-s5zr4\") pod \"cluster-image-registry-operator-dc59b4c8b-tsxh4\" (UID: \"72f34a8b-b736-4737-8f2f-5471054c40f2\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.650503 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.660938 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.663481 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfgxr\" (UniqueName: \"kubernetes.io/projected/0f21f776-e2f4-41e5-bdb9-6639817afa17-kube-api-access-dfgxr\") pod \"controller-manager-879f6c89f-2qlc6\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.675120 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.686780 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qhqd\" (UniqueName: \"kubernetes.io/projected/9f87108b-bbab-4f72-a974-0cb8d188890d-kube-api-access-5qhqd\") pod \"apiserver-76f77b778f-7mqds\" (UID: \"9f87108b-bbab-4f72-a974-0cb8d188890d\") " pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.704857 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hkd5\" (UniqueName: \"kubernetes.io/projected/190a5902-e892-467f-8f9b-6d4b844cbc90-kube-api-access-2hkd5\") pod \"cluster-samples-operator-665b6dd947-cr2tp\" (UID: \"190a5902-e892-467f-8f9b-6d4b844cbc90\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.705021 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.729017 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.734908 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.736723 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnkf2\" (UniqueName: \"kubernetes.io/projected/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-kube-api-access-cnkf2\") pod \"console-f9d7485db-f7bmk\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") " pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.750288 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.768831 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.768982 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.780531 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4047b0cd-e695-4486-a4d9-705b6e9863f2-config\") pod \"service-ca-operator-777779d784-psqx2\" (UID: \"4047b0cd-e695-4486-a4d9-705b6e9863f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.780617 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0422a643-8fdb-4a70-b120-182517c46a6c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tsxsb\" (UID: \"0422a643-8fdb-4a70-b120-182517c46a6c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.780791 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-apiservice-cert\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.780818 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4047b0cd-e695-4486-a4d9-705b6e9863f2-serving-cert\") pod \"service-ca-operator-777779d784-psqx2\" (UID: \"4047b0cd-e695-4486-a4d9-705b6e9863f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.780952 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-proxy-tls\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.780977 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-webhook-cert\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.781401 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4047b0cd-e695-4486-a4d9-705b6e9863f2-config\") pod \"service-ca-operator-777779d784-psqx2\" (UID: \"4047b0cd-e695-4486-a4d9-705b6e9863f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.782835 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.784580 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-proxy-tls\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.785057 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-webhook-cert\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.786115 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/710c8c8c-988b-4d7c-b91d-52724634b484-apiservice-cert\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.786498 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4047b0cd-e695-4486-a4d9-705b6e9863f2-serving-cert\") pod \"service-ca-operator-777779d784-psqx2\" (UID: \"4047b0cd-e695-4486-a4d9-705b6e9863f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.788166 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0422a643-8fdb-4a70-b120-182517c46a6c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tsxsb\" (UID: \"0422a643-8fdb-4a70-b120-182517c46a6c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.788721 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.797307 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.807001 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.809544 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.815945 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.830587 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.852356 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.862469 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-68dmw"] Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.868662 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.889671 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.909015 4909 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.928847 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 26 07:02:49 crc kubenswrapper[4909]: I1126 07:02:49.988043 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.007093 4909 request.go:700] Waited for 1.909287416s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.008319 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.028869 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.051246 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.061170 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-f7bmk"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.069865 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.088824 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.111066 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.129957 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.131217 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.134821 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.145647 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fsf9\" (UniqueName: \"kubernetes.io/projected/cd48299d-0c3f-4475-b4f5-a00d85b71393-kube-api-access-5fsf9\") pod \"downloads-7954f5f757-4cmcz\" (UID: \"cd48299d-0c3f-4475-b4f5-a00d85b71393\") " pod="openshift-console/downloads-7954f5f757-4cmcz" Nov 26 07:02:50 crc kubenswrapper[4909]: W1126 07:02:50.147816 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb0d84e2_45ac_4936_b267_d75214779f91.slice/crio-d7121249b8031761f44f35bdb3a8915c0ca9cd76f1a5a43645057872cb7d021f WatchSource:0}: Error finding container d7121249b8031761f44f35bdb3a8915c0ca9cd76f1a5a43645057872cb7d021f: Status 404 returned error can't find the container with id d7121249b8031761f44f35bdb3a8915c0ca9cd76f1a5a43645057872cb7d021f Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.163363 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqxj2\" (UniqueName: \"kubernetes.io/projected/56714b37-2c6a-42d6-8f7f-c8302a61bd6f-kube-api-access-pqxj2\") pod \"machine-api-operator-5694c8668f-p4d2g\" (UID: \"56714b37-2c6a-42d6-8f7f-c8302a61bd6f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.183548 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfck9\" (UniqueName: \"kubernetes.io/projected/c1f4c3b4-d536-424d-aecc-c1ea2228940f-kube-api-access-gfck9\") pod \"dns-operator-744455d44c-zpp77\" (UID: \"c1f4c3b4-d536-424d-aecc-c1ea2228940f\") " pod="openshift-dns-operator/dns-operator-744455d44c-zpp77" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.203543 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.208554 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-r4q2l"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.209857 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2nwn\" (UniqueName: \"kubernetes.io/projected/480e0e98-6e8e-480e-bf79-fa4d6cba6582-kube-api-access-l2nwn\") pod \"openshift-apiserver-operator-796bbdcf4f-4xv5r\" (UID: \"480e0e98-6e8e-480e-bf79-fa4d6cba6582\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" Nov 26 07:02:50 crc kubenswrapper[4909]: W1126 07:02:50.219511 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod543475d8_ab28_4406_83cf_ca4c0aecd157.slice/crio-0dfb7bc40ccbb1e41a939b18c4810e75791e37b98a47fab63790a07bef078947 WatchSource:0}: Error finding container 0dfb7bc40ccbb1e41a939b18c4810e75791e37b98a47fab63790a07bef078947: Status 404 returned error can't find the container with id 0dfb7bc40ccbb1e41a939b18c4810e75791e37b98a47fab63790a07bef078947 Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.223668 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zpp77" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.226076 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.229513 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fxdz\" (UniqueName: \"kubernetes.io/projected/97e5a116-5615-4290-bee9-44f45f2433df-kube-api-access-8fxdz\") pod \"apiserver-7bbb656c7d-xmhvc\" (UID: \"97e5a116-5615-4290-bee9-44f45f2433df\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.232661 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bwg2r"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.245392 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4b01fa-600c-4784-877e-affbde07fb1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rpddn\" (UID: \"7e4b01fa-600c-4784-877e-affbde07fb1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" Nov 26 07:02:50 crc kubenswrapper[4909]: W1126 07:02:50.253478 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43fdcbf7_b990_48a0_8148_5fd13c5ba035.slice/crio-bcb8214a8bac103469379a7db994d8623b9c41a3b3c7e7ef84fd2ec215205738 WatchSource:0}: Error finding container bcb8214a8bac103469379a7db994d8623b9c41a3b3c7e7ef84fd2ec215205738: Status 404 returned error can't find the container with id bcb8214a8bac103469379a7db994d8623b9c41a3b3c7e7ef84fd2ec215205738 Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.263880 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4f9n\" (UniqueName: \"kubernetes.io/projected/9e89c18c-706c-4894-a472-01f259f2c854-kube-api-access-k4f9n\") pod \"machine-approver-56656f9798-nxb7l\" (UID: 
\"9e89c18c-706c-4894-a472-01f259f2c854\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.281708 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk492\" (UniqueName: \"kubernetes.io/projected/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-kube-api-access-jk492\") pod \"route-controller-manager-6576b87f9c-c4h29\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.304490 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2cd6\" (UniqueName: \"kubernetes.io/projected/2bd06321-043f-48ec-a6d7-b19de03ffbf6-kube-api-access-f2cd6\") pod \"machine-config-controller-84d6567774-klzpc\" (UID: \"2bd06321-043f-48ec-a6d7-b19de03ffbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.329454 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" event={"ID":"72f34a8b-b736-4737-8f2f-5471054c40f2","Type":"ContainerStarted","Data":"e073da77e6a1d2402ad4ebb08a11f510b8c9b658c01f53bcae6a0d768bbe0110"} Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.336043 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfxzn\" (UniqueName: \"kubernetes.io/projected/4047b0cd-e695-4486-a4d9-705b6e9863f2-kube-api-access-tfxzn\") pod \"service-ca-operator-777779d784-psqx2\" (UID: \"4047b0cd-e695-4486-a4d9-705b6e9863f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.343841 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7mqds"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.354886 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2qlc6"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.361285 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvmpk\" (UniqueName: \"kubernetes.io/projected/deb4b7f7-1494-4289-8ac8-fbfcef6c76e0-kube-api-access-xvmpk\") pod \"machine-config-operator-74547568cd-p4gdd\" (UID: \"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.365039 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" event={"ID":"43fdcbf7-b990-48a0-8148-5fd13c5ba035","Type":"ContainerStarted","Data":"bcb8214a8bac103469379a7db994d8623b9c41a3b3c7e7ef84fd2ec215205738"} Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.367190 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-r4q2l" event={"ID":"543475d8-ab28-4406-83cf-ca4c0aecd157","Type":"ContainerStarted","Data":"0dfb7bc40ccbb1e41a939b18c4810e75791e37b98a47fab63790a07bef078947"} Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.369183 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" 
event={"ID":"bb0d84e2-45ac-4936-b267-d75214779f91","Type":"ContainerStarted","Data":"d7121249b8031761f44f35bdb3a8915c0ca9cd76f1a5a43645057872cb7d021f"} Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.370132 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8pst\" (UniqueName: \"kubernetes.io/projected/0422a643-8fdb-4a70-b120-182517c46a6c-kube-api-access-v8pst\") pod \"package-server-manager-789f6589d5-tsxsb\" (UID: \"0422a643-8fdb-4a70-b120-182517c46a6c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.371101 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" event={"ID":"dc63a259-02f9-4ab8-83ac-04baf7e15766","Type":"ContainerStarted","Data":"ca2fb7670bf3cb8877b4b40df8912b325b891e2bf0e5a4c289cf6d16e840d2f9"} Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.379514 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f7bmk" event={"ID":"32133cc3-d6eb-48c5-a3fc-11e820ed8a48","Type":"ContainerStarted","Data":"01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c"} Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.379563 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f7bmk" event={"ID":"32133cc3-d6eb-48c5-a3fc-11e820ed8a48","Type":"ContainerStarted","Data":"ce69f607e431e2d95f65518965c5951a85dbaf01cd26c7e35a0121e2af38d286"} Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.384687 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" event={"ID":"36375488-d0da-488c-b0ac-1e4f63490cbd","Type":"ContainerStarted","Data":"896d8a42c420298a7941737d9bde1187363fa70a0787b4bc7d8393d0525b21e1"} Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.386699 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkpvz\" (UniqueName: \"kubernetes.io/projected/2c2c78bd-80a9-4543-b1d1-432d3a29d3e5-kube-api-access-wkpvz\") pod \"csi-hostpathplugin-xt76w\" (UID: \"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5\") " pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.403743 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb695\" (UniqueName: \"kubernetes.io/projected/710c8c8c-988b-4d7c-b91d-52724634b484-kube-api-access-cb695\") pod \"packageserver-d55dfcdfc-pgsgg\" (UID: \"710c8c8c-988b-4d7c-b91d-52724634b484\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.405217 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.421767 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.429585 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4cmcz" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.430436 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.443524 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.456199 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p4d2g"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.481111 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.488245 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.489189 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvmvt\" (UniqueName: \"kubernetes.io/projected/de9cd657-3380-48f7-a15c-dd81cdecc57d-kube-api-access-mvmvt\") pod \"migrator-59844c95c7-s9lpz\" (UID: \"de9cd657-3380-48f7-a15c-dd81cdecc57d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.489216 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-etcd-client\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.489254 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-metrics-certs\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.489270 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sr6mp\" (UID: \"0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.489287 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9974b977-96c4-4235-8dc8-1bc10284e536-node-bootstrap-token\") pod \"machine-config-server-59zk5\" (UID: \"9974b977-96c4-4235-8dc8-1bc10284e536\") " pod="openshift-machine-config-operator/machine-config-server-59zk5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.489329 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjrht\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-kube-api-access-kjrht\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.489344 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eb3ad8b7-9f5f-49e7-9509-dca22cd87226-signing-cabundle\") pod \"service-ca-9c57cc56f-fbn7q\" (UID: \"eb3ad8b7-9f5f-49e7-9509-dca22cd87226\") " pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.489369 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqkzm\" (UniqueName: \"kubernetes.io/projected/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-kube-api-access-kqkzm\") pod \"marketplace-operator-79b997595-g6sfv\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.489384 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6sfv\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.489398 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c118be1-d423-45f8-b280-e72a2773178d-srv-cert\") pod \"catalog-operator-68c6474976-kxkmb\" (UID: \"6c118be1-d423-45f8-b280-e72a2773178d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.489419 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkwjq\" (UniqueName: \"kubernetes.io/projected/6c118be1-d423-45f8-b280-e72a2773178d-kube-api-access-lkwjq\") pod \"catalog-operator-68c6474976-kxkmb\" (UID: \"6c118be1-d423-45f8-b280-e72a2773178d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490344 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtdjr\" (UniqueName: \"kubernetes.io/projected/eb3ad8b7-9f5f-49e7-9509-dca22cd87226-kube-api-access-mtdjr\") pod \"service-ca-9c57cc56f-fbn7q\" (UID: \"eb3ad8b7-9f5f-49e7-9509-dca22cd87226\") " pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490389 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f2acf793-348d-40a3-8433-7c82d748228b-srv-cert\") pod \"olm-operator-6b444d44fb-p5hhl\" (UID: \"f2acf793-348d-40a3-8433-7c82d748228b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:50 crc kubenswrapper[4909]: W1126 07:02:50.490402 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56714b37_2c6a_42d6_8f7f_c8302a61bd6f.slice/crio-7544bf0de85c1ec61731a04b395197a596aa1599dc71d7d05c0c3c857d56ddb7 WatchSource:0}: Error finding container 7544bf0de85c1ec61731a04b395197a596aa1599dc71d7d05c0c3c857d56ddb7: Status 
404 returned error can't find the container with id 7544bf0de85c1ec61731a04b395197a596aa1599dc71d7d05c0c3c857d56ddb7 Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490415 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03e3a595-33da-47a5-ba74-cb7c535134ca-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490436 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d27a19b-96d4-4443-a6c9-20cbd57d3850-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wsqtg\" (UID: \"8d27a19b-96d4-4443-a6c9-20cbd57d3850\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490454 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-default-certificate\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490469 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43dfd865-c878-44c6-96cd-5b8fadfbc25f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xt7tf\" (UID: \"43dfd865-c878-44c6-96cd-5b8fadfbc25f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490487 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrr9d\" (UniqueName: \"kubernetes.io/projected/f2acf793-348d-40a3-8433-7c82d748228b-kube-api-access-xrr9d\") pod \"olm-operator-6b444d44fb-p5hhl\" (UID: \"f2acf793-348d-40a3-8433-7c82d748228b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490513 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx84r\" (UniqueName: \"kubernetes.io/projected/0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2-kube-api-access-vx84r\") pod \"kube-storage-version-migrator-operator-b67b599dd-sr6mp\" (UID: \"0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490528 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-tls\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490544 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-metrics-tls\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490583 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d979f\" (UniqueName: \"kubernetes.io/projected/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-kube-api-access-d979f\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.490680 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d27a19b-96d4-4443-a6c9-20cbd57d3850-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wsqtg\" (UID: \"8d27a19b-96d4-4443-a6c9-20cbd57d3850\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491037 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-serving-cert\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491073 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c118be1-d423-45f8-b280-e72a2773178d-profile-collector-cert\") pod \"catalog-operator-68c6474976-kxkmb\" (UID: \"6c118be1-d423-45f8-b280-e72a2773178d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491150 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9msb\" (UniqueName: \"kubernetes.io/projected/9e4f7691-475b-4ab8-9c1b-8f482fe9424c-kube-api-access-n9msb\") pod \"multus-admission-controller-857f4d67dd-bm9vz\" (UID: \"9e4f7691-475b-4ab8-9c1b-8f482fe9424c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491193 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eb3ad8b7-9f5f-49e7-9509-dca22cd87226-signing-key\") pod \"service-ca-9c57cc56f-fbn7q\" (UID: \"eb3ad8b7-9f5f-49e7-9509-dca22cd87226\") " pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491225 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-stats-auth\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491269 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-etcd-service-ca\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491461 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-config\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491505 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-config-volume\") pod \"collect-profiles-29402340-xfxg7\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491525 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5ks\" (UniqueName: \"kubernetes.io/projected/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-kube-api-access-9d5ks\") pod \"collect-profiles-29402340-xfxg7\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491638 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sr6mp\" (UID: \"0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491936 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9e4f7691-475b-4ab8-9c1b-8f482fe9424c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bm9vz\" (UID: \"9e4f7691-475b-4ab8-9c1b-8f482fe9424c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491971 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-certificates\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.491993 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-trusted-ca\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492012 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/9974b977-96c4-4235-8dc8-1bc10284e536-certs\") pod \"machine-config-server-59zk5\" (UID: \"9974b977-96c4-4235-8dc8-1bc10284e536\") " pod="openshift-machine-config-operator/machine-config-server-59zk5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492060 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sx4d\" (UniqueName: \"kubernetes.io/projected/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-kube-api-access-5sx4d\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492074 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-etcd-ca\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492090 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6sfv\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492132 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43dfd865-c878-44c6-96cd-5b8fadfbc25f-config\") pod \"kube-controller-manager-operator-78b949d7b-xt7tf\" (UID: \"43dfd865-c878-44c6-96cd-5b8fadfbc25f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492195 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-trusted-ca\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492217 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492264 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492283 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8d27a19b-96d4-4443-a6c9-20cbd57d3850-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wsqtg\" (UID: \"8d27a19b-96d4-4443-a6c9-20cbd57d3850\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492306 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-service-ca-bundle\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492326 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl97m\" (UniqueName: \"kubernetes.io/projected/22ae4443-3879-489b-a556-474a11712c47-kube-api-access-vl97m\") pod \"control-plane-machine-set-operator-78cbb6b69f-9g6s4\" (UID: \"22ae4443-3879-489b-a556-474a11712c47\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492343 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9z9m\" (UniqueName: \"kubernetes.io/projected/9974b977-96c4-4235-8dc8-1bc10284e536-kube-api-access-l9z9m\") pod \"machine-config-server-59zk5\" (UID: \"9974b977-96c4-4235-8dc8-1bc10284e536\") " pod="openshift-machine-config-operator/machine-config-server-59zk5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492362 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43dfd865-c878-44c6-96cd-5b8fadfbc25f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xt7tf\" (UID: \"43dfd865-c878-44c6-96cd-5b8fadfbc25f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.492583 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sml5q\" (UniqueName: \"kubernetes.io/projected/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-kube-api-access-sml5q\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.493129 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/22ae4443-3879-489b-a556-474a11712c47-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9g6s4\" (UID: \"22ae4443-3879-489b-a556-474a11712c47\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.493195 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-bound-sa-token\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 
07:02:50.493242 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03e3a595-33da-47a5-ba74-cb7c535134ca-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.493266 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-secret-volume\") pod \"collect-profiles-29402340-xfxg7\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.494141 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f2acf793-348d-40a3-8433-7c82d748228b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p5hhl\" (UID: \"f2acf793-348d-40a3-8433-7c82d748228b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.495101 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" Nov 26 07:02:50 crc kubenswrapper[4909]: E1126 07:02:50.496218 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:50.996194402 +0000 UTC m=+143.142405638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.499490 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xt76w" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.509220 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.513087 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zpp77"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.515681 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" Nov 26 07:02:50 crc kubenswrapper[4909]: W1126 07:02:50.547907 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1f4c3b4_d536_424d_aecc_c1ea2228940f.slice/crio-b86939f5aa0a123b24dc1af8a04af69b2dbdb4de3ecffb58517d459dcae56260 WatchSource:0}: Error finding container b86939f5aa0a123b24dc1af8a04af69b2dbdb4de3ecffb58517d459dcae56260: Status 404 returned error can't find the container with id b86939f5aa0a123b24dc1af8a04af69b2dbdb4de3ecffb58517d459dcae56260 Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.595329 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.595548 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f2acf793-348d-40a3-8433-7c82d748228b-srv-cert\") pod \"olm-operator-6b444d44fb-p5hhl\" (UID: \"f2acf793-348d-40a3-8433-7c82d748228b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.595629 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03e3a595-33da-47a5-ba74-cb7c535134ca-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.595657 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d27a19b-96d4-4443-a6c9-20cbd57d3850-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wsqtg\" (UID: \"8d27a19b-96d4-4443-a6c9-20cbd57d3850\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.595708 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-default-certificate\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.595783 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43dfd865-c878-44c6-96cd-5b8fadfbc25f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xt7tf\" (UID: \"43dfd865-c878-44c6-96cd-5b8fadfbc25f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.595828 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrr9d\" (UniqueName: \"kubernetes.io/projected/f2acf793-348d-40a3-8433-7c82d748228b-kube-api-access-xrr9d\") pod \"olm-operator-6b444d44fb-p5hhl\" (UID: \"f2acf793-348d-40a3-8433-7c82d748228b\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.595883 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx84r\" (UniqueName: \"kubernetes.io/projected/0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2-kube-api-access-vx84r\") pod \"kube-storage-version-migrator-operator-b67b599dd-sr6mp\" (UID: \"0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.595906 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-tls\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.595955 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-metrics-tls\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.595979 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d979f\" (UniqueName: \"kubernetes.io/projected/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-kube-api-access-d979f\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596040 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d27a19b-96d4-4443-a6c9-20cbd57d3850-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wsqtg\" (UID: \"8d27a19b-96d4-4443-a6c9-20cbd57d3850\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" Nov 26 07:02:50 crc kubenswrapper[4909]: E1126 07:02:50.596091 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.096066623 +0000 UTC m=+143.242277789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596132 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c118be1-d423-45f8-b280-e72a2773178d-profile-collector-cert\") pod \"catalog-operator-68c6474976-kxkmb\" (UID: \"6c118be1-d423-45f8-b280-e72a2773178d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596179 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-serving-cert\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596199 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3a459d8-796b-4ad3-9a1f-21e7694eb4a9-config-volume\") pod \"dns-default-r8xvl\" (UID: \"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9\") " pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596229 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9msb\" (UniqueName: \"kubernetes.io/projected/9e4f7691-475b-4ab8-9c1b-8f482fe9424c-kube-api-access-n9msb\") pod \"multus-admission-controller-857f4d67dd-bm9vz\" (UID: \"9e4f7691-475b-4ab8-9c1b-8f482fe9424c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596250 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eb3ad8b7-9f5f-49e7-9509-dca22cd87226-signing-key\") pod \"service-ca-9c57cc56f-fbn7q\" (UID: \"eb3ad8b7-9f5f-49e7-9509-dca22cd87226\") " pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596289 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-stats-auth\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596326 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-etcd-service-ca\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596367 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-config\") pod 
\"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596385 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-config-volume\") pod \"collect-profiles-29402340-xfxg7\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596405 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5ks\" (UniqueName: \"kubernetes.io/projected/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-kube-api-access-9d5ks\") pod \"collect-profiles-29402340-xfxg7\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596454 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sr6mp\" (UID: \"0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596492 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9e4f7691-475b-4ab8-9c1b-8f482fe9424c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bm9vz\" (UID: \"9e4f7691-475b-4ab8-9c1b-8f482fe9424c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596554 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-certificates\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596640 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-trusted-ca\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596668 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9974b977-96c4-4235-8dc8-1bc10284e536-certs\") pod \"machine-config-server-59zk5\" (UID: \"9974b977-96c4-4235-8dc8-1bc10284e536\") " pod="openshift-machine-config-operator/machine-config-server-59zk5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.596777 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee33a2d3-cdb8-4301-9aa5-281eba47d3e5-cert\") pod \"ingress-canary-xhcl5\" (UID: \"ee33a2d3-cdb8-4301-9aa5-281eba47d3e5\") " pod="openshift-ingress-canary/ingress-canary-xhcl5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 
07:02:50.596907 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sx4d\" (UniqueName: \"kubernetes.io/projected/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-kube-api-access-5sx4d\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597127 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6sfv\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597309 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-etcd-ca\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597332 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43dfd865-c878-44c6-96cd-5b8fadfbc25f-config\") pod \"kube-controller-manager-operator-78b949d7b-xt7tf\" (UID: \"43dfd865-c878-44c6-96cd-5b8fadfbc25f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597418 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3a459d8-796b-4ad3-9a1f-21e7694eb4a9-metrics-tls\") pod \"dns-default-r8xvl\" (UID: \"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9\") " pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597459 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wd8s\" (UniqueName: \"kubernetes.io/projected/e3a459d8-796b-4ad3-9a1f-21e7694eb4a9-kube-api-access-5wd8s\") pod \"dns-default-r8xvl\" (UID: \"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9\") " pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597477 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597498 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-trusted-ca\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597541 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597558 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d27a19b-96d4-4443-a6c9-20cbd57d3850-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wsqtg\" (UID: \"8d27a19b-96d4-4443-a6c9-20cbd57d3850\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597575 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl97m\" (UniqueName: \"kubernetes.io/projected/22ae4443-3879-489b-a556-474a11712c47-kube-api-access-vl97m\") pod \"control-plane-machine-set-operator-78cbb6b69f-9g6s4\" (UID: \"22ae4443-3879-489b-a556-474a11712c47\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597665 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9z9m\" (UniqueName: \"kubernetes.io/projected/9974b977-96c4-4235-8dc8-1bc10284e536-kube-api-access-l9z9m\") pod \"machine-config-server-59zk5\" (UID: \"9974b977-96c4-4235-8dc8-1bc10284e536\") " pod="openshift-machine-config-operator/machine-config-server-59zk5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597714 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-service-ca-bundle\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597796 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43dfd865-c878-44c6-96cd-5b8fadfbc25f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xt7tf\" (UID: \"43dfd865-c878-44c6-96cd-5b8fadfbc25f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597872 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sml5q\" (UniqueName: \"kubernetes.io/projected/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-kube-api-access-sml5q\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597900 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-bound-sa-token\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597938 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/22ae4443-3879-489b-a556-474a11712c47-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9g6s4\" 
(UID: \"22ae4443-3879-489b-a556-474a11712c47\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.597993 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03e3a595-33da-47a5-ba74-cb7c535134ca-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.598047 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-secret-volume\") pod \"collect-profiles-29402340-xfxg7\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.598065 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f2acf793-348d-40a3-8433-7c82d748228b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p5hhl\" (UID: \"f2acf793-348d-40a3-8433-7c82d748228b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.598154 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvmvt\" (UniqueName: \"kubernetes.io/projected/de9cd657-3380-48f7-a15c-dd81cdecc57d-kube-api-access-mvmvt\") pod \"migrator-59844c95c7-s9lpz\" (UID: \"de9cd657-3380-48f7-a15c-dd81cdecc57d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.598820 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-etcd-client\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.598911 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sr6mp\" (UID: \"0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.598938 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9974b977-96c4-4235-8dc8-1bc10284e536-node-bootstrap-token\") pod \"machine-config-server-59zk5\" (UID: \"9974b977-96c4-4235-8dc8-1bc10284e536\") " pod="openshift-machine-config-operator/machine-config-server-59zk5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.599792 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-metrics-certs\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: 
I1126 07:02:50.599865 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjrht\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-kube-api-access-kjrht\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.599918 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpnhg\" (UniqueName: \"kubernetes.io/projected/ee33a2d3-cdb8-4301-9aa5-281eba47d3e5-kube-api-access-xpnhg\") pod \"ingress-canary-xhcl5\" (UID: \"ee33a2d3-cdb8-4301-9aa5-281eba47d3e5\") " pod="openshift-ingress-canary/ingress-canary-xhcl5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.599963 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqkzm\" (UniqueName: \"kubernetes.io/projected/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-kube-api-access-kqkzm\") pod \"marketplace-operator-79b997595-g6sfv\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.600198 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eb3ad8b7-9f5f-49e7-9509-dca22cd87226-signing-cabundle\") pod \"service-ca-9c57cc56f-fbn7q\" (UID: \"eb3ad8b7-9f5f-49e7-9509-dca22cd87226\") " pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.600351 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6sfv\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.600384 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c118be1-d423-45f8-b280-e72a2773178d-srv-cert\") pod \"catalog-operator-68c6474976-kxkmb\" (UID: \"6c118be1-d423-45f8-b280-e72a2773178d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.602780 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-service-ca-bundle\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.604362 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-config-volume\") pod \"collect-profiles-29402340-xfxg7\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.610603 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.613006 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-config\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.613552 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/22ae4443-3879-489b-a556-474a11712c47-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9g6s4\" (UID: \"22ae4443-3879-489b-a556-474a11712c47\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.613717 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-etcd-service-ca\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.617040 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sr6mp\" (UID: \"0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.618021 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03e3a595-33da-47a5-ba74-cb7c535134ca-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.618358 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-trusted-ca\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.618712 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-certificates\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.619488 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-etcd-ca\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.621946 4909 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eb3ad8b7-9f5f-49e7-9509-dca22cd87226-signing-cabundle\") pod \"service-ca-9c57cc56f-fbn7q\" (UID: \"eb3ad8b7-9f5f-49e7-9509-dca22cd87226\") " pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.622759 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f2acf793-348d-40a3-8433-7c82d748228b-srv-cert\") pod \"olm-operator-6b444d44fb-p5hhl\" (UID: \"f2acf793-348d-40a3-8433-7c82d748228b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.627550 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-trusted-ca\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: E1126 07:02:50.627970 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.127953899 +0000 UTC m=+143.274165065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.628575 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6sfv\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.629067 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sr6mp\" (UID: \"0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.629819 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d27a19b-96d4-4443-a6c9-20cbd57d3850-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wsqtg\" (UID: \"8d27a19b-96d4-4443-a6c9-20cbd57d3850\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.629842 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43dfd865-c878-44c6-96cd-5b8fadfbc25f-config\") pod \"kube-controller-manager-operator-78b949d7b-xt7tf\" (UID: \"43dfd865-c878-44c6-96cd-5b8fadfbc25f\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.630145 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkwjq\" (UniqueName: \"kubernetes.io/projected/6c118be1-d423-45f8-b280-e72a2773178d-kube-api-access-lkwjq\") pod \"catalog-operator-68c6474976-kxkmb\" (UID: \"6c118be1-d423-45f8-b280-e72a2773178d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.630555 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtdjr\" (UniqueName: \"kubernetes.io/projected/eb3ad8b7-9f5f-49e7-9509-dca22cd87226-kube-api-access-mtdjr\") pod \"service-ca-9c57cc56f-fbn7q\" (UID: \"eb3ad8b7-9f5f-49e7-9509-dca22cd87226\") " pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.635147 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9974b977-96c4-4235-8dc8-1bc10284e536-certs\") pod \"machine-config-server-59zk5\" (UID: \"9974b977-96c4-4235-8dc8-1bc10284e536\") " pod="openshift-machine-config-operator/machine-config-server-59zk5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.635438 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-metrics-tls\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.635643 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-secret-volume\") pod \"collect-profiles-29402340-xfxg7\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.638295 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9974b977-96c4-4235-8dc8-1bc10284e536-node-bootstrap-token\") pod \"machine-config-server-59zk5\" (UID: \"9974b977-96c4-4235-8dc8-1bc10284e536\") " pod="openshift-machine-config-operator/machine-config-server-59zk5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.638929 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c118be1-d423-45f8-b280-e72a2773178d-profile-collector-cert\") pod \"catalog-operator-68c6474976-kxkmb\" (UID: \"6c118be1-d423-45f8-b280-e72a2773178d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.638966 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6sfv\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.640308 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-serving-cert\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.641835 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43dfd865-c878-44c6-96cd-5b8fadfbc25f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xt7tf\" (UID: \"43dfd865-c878-44c6-96cd-5b8fadfbc25f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.648195 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c118be1-d423-45f8-b280-e72a2773178d-srv-cert\") pod \"catalog-operator-68c6474976-kxkmb\" (UID: \"6c118be1-d423-45f8-b280-e72a2773178d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.650948 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eb3ad8b7-9f5f-49e7-9509-dca22cd87226-signing-key\") pod \"service-ca-9c57cc56f-fbn7q\" (UID: \"eb3ad8b7-9f5f-49e7-9509-dca22cd87226\") " pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.652837 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-etcd-client\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.653145 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-stats-auth\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.653179 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-default-certificate\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.653264 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d27a19b-96d4-4443-a6c9-20cbd57d3850-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wsqtg\" (UID: \"8d27a19b-96d4-4443-a6c9-20cbd57d3850\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.653496 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f2acf793-348d-40a3-8433-7c82d748228b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p5hhl\" (UID: \"f2acf793-348d-40a3-8433-7c82d748228b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:50 crc 
kubenswrapper[4909]: I1126 07:02:50.653658 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9e4f7691-475b-4ab8-9c1b-8f482fe9424c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bm9vz\" (UID: \"9e4f7691-475b-4ab8-9c1b-8f482fe9424c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.653901 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-metrics-certs\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.654030 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-tls\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.654747 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03e3a595-33da-47a5-ba74-cb7c535134ca-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.663986 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrr9d\" (UniqueName: \"kubernetes.io/projected/f2acf793-348d-40a3-8433-7c82d748228b-kube-api-access-xrr9d\") pod \"olm-operator-6b444d44fb-p5hhl\" (UID: \"f2acf793-348d-40a3-8433-7c82d748228b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.679152 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d27a19b-96d4-4443-a6c9-20cbd57d3850-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wsqtg\" (UID: \"8d27a19b-96d4-4443-a6c9-20cbd57d3850\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.695932 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d979f\" (UniqueName: \"kubernetes.io/projected/c027d7f7-ada5-4c58-a49f-b38cfe15c37a-kube-api-access-d979f\") pod \"etcd-operator-b45778765-dgf4b\" (UID: \"c027d7f7-ada5-4c58-a49f-b38cfe15c37a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.707373 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx84r\" (UniqueName: \"kubernetes.io/projected/0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2-kube-api-access-vx84r\") pod \"kube-storage-version-migrator-operator-b67b599dd-sr6mp\" (UID: \"0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.731514 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:50 crc kubenswrapper[4909]: E1126 07:02:50.731826 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.231795055 +0000 UTC m=+143.378006221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.731918 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpnhg\" (UniqueName: \"kubernetes.io/projected/ee33a2d3-cdb8-4301-9aa5-281eba47d3e5-kube-api-access-xpnhg\") pod \"ingress-canary-xhcl5\" (UID: \"ee33a2d3-cdb8-4301-9aa5-281eba47d3e5\") " pod="openshift-ingress-canary/ingress-canary-xhcl5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.732015 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3a459d8-796b-4ad3-9a1f-21e7694eb4a9-config-volume\") pod \"dns-default-r8xvl\" (UID: \"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9\") " pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.732070 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee33a2d3-cdb8-4301-9aa5-281eba47d3e5-cert\") pod \"ingress-canary-xhcl5\" (UID: \"ee33a2d3-cdb8-4301-9aa5-281eba47d3e5\") " pod="openshift-ingress-canary/ingress-canary-xhcl5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.732145 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3a459d8-796b-4ad3-9a1f-21e7694eb4a9-metrics-tls\") pod \"dns-default-r8xvl\" (UID: \"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9\") " pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.732188 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wd8s\" (UniqueName: \"kubernetes.io/projected/e3a459d8-796b-4ad3-9a1f-21e7694eb4a9-kube-api-access-5wd8s\") pod \"dns-default-r8xvl\" (UID: \"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9\") " pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.732227 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: E1126 07:02:50.733309 4909 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.233289155 +0000 UTC m=+143.379500321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.737601 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3a459d8-796b-4ad3-9a1f-21e7694eb4a9-config-volume\") pod \"dns-default-r8xvl\" (UID: \"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9\") " pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.748153 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9z9m\" (UniqueName: \"kubernetes.io/projected/9974b977-96c4-4235-8dc8-1bc10284e536-kube-api-access-l9z9m\") pod \"machine-config-server-59zk5\" (UID: \"9974b977-96c4-4235-8dc8-1bc10284e536\") " pod="openshift-machine-config-operator/machine-config-server-59zk5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.766938 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3a459d8-796b-4ad3-9a1f-21e7694eb4a9-metrics-tls\") pod \"dns-default-r8xvl\" (UID: \"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9\") " pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.769635 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-bound-sa-token\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.770675 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43dfd865-c878-44c6-96cd-5b8fadfbc25f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xt7tf\" (UID: \"43dfd865-c878-44c6-96cd-5b8fadfbc25f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.773827 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee33a2d3-cdb8-4301-9aa5-281eba47d3e5-cert\") pod \"ingress-canary-xhcl5\" (UID: \"ee33a2d3-cdb8-4301-9aa5-281eba47d3e5\") " pod="openshift-ingress-canary/ingress-canary-xhcl5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.774133 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-59zk5" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.785759 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5ks\" (UniqueName: \"kubernetes.io/projected/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-kube-api-access-9d5ks\") pod \"collect-profiles-29402340-xfxg7\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.822450 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sml5q\" (UniqueName: \"kubernetes.io/projected/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-kube-api-access-sml5q\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.836159 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvmvt\" (UniqueName: \"kubernetes.io/projected/de9cd657-3380-48f7-a15c-dd81cdecc57d-kube-api-access-mvmvt\") pod \"migrator-59844c95c7-s9lpz\" (UID: \"de9cd657-3380-48f7-a15c-dd81cdecc57d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.836347 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:50 crc kubenswrapper[4909]: E1126 07:02:50.836972 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.336949927 +0000 UTC m=+143.483161133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.837017 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: E1126 07:02:50.837404 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.337388018 +0000 UTC m=+143.483599184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.850360 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjrht\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-kube-api-access-kjrht\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.894198 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqkzm\" (UniqueName: \"kubernetes.io/projected/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-kube-api-access-kqkzm\") pod \"marketplace-operator-79b997595-g6sfv\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.900376 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.918976 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sx4d\" (UniqueName: \"kubernetes.io/projected/cd7f3942-e3a5-47ca-a9eb-becfaa64d62a-kube-api-access-5sx4d\") pod \"router-default-5444994796-7zgjj\" (UID: \"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a\") " pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.919649 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.927640 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9msb\" (UniqueName: \"kubernetes.io/projected/9e4f7691-475b-4ab8-9c1b-8f482fe9424c-kube-api-access-n9msb\") pod \"multus-admission-controller-857f4d67dd-bm9vz\" (UID: \"9e4f7691-475b-4ab8-9c1b-8f482fe9424c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.937812 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:50 crc kubenswrapper[4909]: E1126 07:02:50.938186 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.438170623 +0000 UTC m=+143.584381789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.947706 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl97m\" (UniqueName: \"kubernetes.io/projected/22ae4443-3879-489b-a556-474a11712c47-kube-api-access-vl97m\") pod \"control-plane-machine-set-operator-78cbb6b69f-9g6s4\" (UID: \"22ae4443-3879-489b-a556-474a11712c47\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.951816 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.967621 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.970025 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54ec2236-5c8f-4d51-97d0-2145a8c91a0c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gvvmw\" (UID: \"54ec2236-5c8f-4d51-97d0-2145a8c91a0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.982393 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.985663 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.992047 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.992231 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkwjq\" (UniqueName: \"kubernetes.io/projected/6c118be1-d423-45f8-b280-e72a2773178d-kube-api-access-lkwjq\") pod \"catalog-operator-68c6474976-kxkmb\" (UID: \"6c118be1-d423-45f8-b280-e72a2773178d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.998273 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg"] Nov 26 07:02:50 crc kubenswrapper[4909]: I1126 07:02:50.998470 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.012684 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.014284 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtdjr\" (UniqueName: \"kubernetes.io/projected/eb3ad8b7-9f5f-49e7-9509-dca22cd87226-kube-api-access-mtdjr\") pod \"service-ca-9c57cc56f-fbn7q\" (UID: \"eb3ad8b7-9f5f-49e7-9509-dca22cd87226\") " pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.034534 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpnhg\" (UniqueName: \"kubernetes.io/projected/ee33a2d3-cdb8-4301-9aa5-281eba47d3e5-kube-api-access-xpnhg\") pod \"ingress-canary-xhcl5\" (UID: \"ee33a2d3-cdb8-4301-9aa5-281eba47d3e5\") " pod="openshift-ingress-canary/ingress-canary-xhcl5" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.039939 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.040352 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.540339685 +0000 UTC m=+143.686550851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.052874 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-psqx2"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.052930 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.052943 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.053206 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.058173 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wd8s\" (UniqueName: \"kubernetes.io/projected/e3a459d8-796b-4ad3-9a1f-21e7694eb4a9-kube-api-access-5wd8s\") pod \"dns-default-r8xvl\" (UID: \"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9\") " pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.062651 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.096335 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4cmcz"] Nov 26 07:02:51 crc kubenswrapper[4909]: W1126 07:02:51.103600 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0422a643_8fdb_4a70_b120_182517c46a6c.slice/crio-13822a99042999b669f9c528c4e86402dd3cbac3962082876dd00653a9d6e66d WatchSource:0}: Error finding container 13822a99042999b669f9c528c4e86402dd3cbac3962082876dd00653a9d6e66d: Status 404 returned error can't find the container with id 13822a99042999b669f9c528c4e86402dd3cbac3962082876dd00653a9d6e66d Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.108970 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xhcl5" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.118665 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.149045 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.165963 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.665937638 +0000 UTC m=+143.812148804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.166860 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.181412 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.185880 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.220905 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.258360 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.258740 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.758727661 +0000 UTC m=+143.904938827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.260747 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.260789 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xt76w"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.276255 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6sfv"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.289663 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl"] Nov 26 07:02:51 crc kubenswrapper[4909]: W1126 07:02:51.308463 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97e5a116_5615_4290_bee9_44f45f2433df.slice/crio-5ba200acf5ebba7335b4211b55499d76ce26f6700f310099ce8f8e560a9711a8 WatchSource:0}: Error finding container 5ba200acf5ebba7335b4211b55499d76ce26f6700f310099ce8f8e560a9711a8: Status 404 returned error can't find the container with id 5ba200acf5ebba7335b4211b55499d76ce26f6700f310099ce8f8e560a9711a8 Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.343665 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.345067 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.354507 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.354547 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.359147 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.359677 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.85966029 +0000 UTC m=+144.005871456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.390928 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" event={"ID":"4047b0cd-e695-4486-a4d9-705b6e9863f2","Type":"ContainerStarted","Data":"dc18f7f406b2130eff921653b9ff3f43c8c3918d4646c7f8afda9d9f90d00b34"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.392472 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" event={"ID":"190a5902-e892-467f-8f9b-6d4b844cbc90","Type":"ContainerStarted","Data":"c43b488662e255d888036912a5a722ac2f7dad30a3ac9335ee45f85d615736e9"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.392520 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" event={"ID":"190a5902-e892-467f-8f9b-6d4b844cbc90","Type":"ContainerStarted","Data":"b29777fe4a1aa1d394ee778b05ecb91a3186aa0879888d7b0d1e03bf66f0fd3d"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.392532 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" event={"ID":"190a5902-e892-467f-8f9b-6d4b844cbc90","Type":"ContainerStarted","Data":"7cfd3dd1da16131502ed881ddf04f85bf4192d0d5a3da04af43b8c4cdb9bab38"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.400720 4909 generic.go:334] "Generic (PLEG): container finished" podID="bb0d84e2-45ac-4936-b267-d75214779f91" containerID="ca97eff731beb931f51071c438372e1eced8e8ce1712a036032b864f2da3e09b" exitCode=0 Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.400800 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" event={"ID":"bb0d84e2-45ac-4936-b267-d75214779f91","Type":"ContainerDied","Data":"ca97eff731beb931f51071c438372e1eced8e8ce1712a036032b864f2da3e09b"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.401756 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" 
event={"ID":"710c8c8c-988b-4d7c-b91d-52724634b484","Type":"ContainerStarted","Data":"889f9987f800ed77b9ecc0115ae893bf823cd9250d8db0761a06d2f44956caaa"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.403737 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" event={"ID":"56714b37-2c6a-42d6-8f7f-c8302a61bd6f","Type":"ContainerStarted","Data":"bd6efc55423593f42ecf8e3452dc659bb97a59f67c29ae3fda26756687d1f726"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.403763 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" event={"ID":"56714b37-2c6a-42d6-8f7f-c8302a61bd6f","Type":"ContainerStarted","Data":"169ffd7c24d28d1b2ec7244b68cd12c6f5deefa8c1509beb752859bb76ceea99"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.403773 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" event={"ID":"56714b37-2c6a-42d6-8f7f-c8302a61bd6f","Type":"ContainerStarted","Data":"7544bf0de85c1ec61731a04b395197a596aa1599dc71d7d05c0c3c857d56ddb7"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.407044 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" event={"ID":"36375488-d0da-488c-b0ac-1e4f63490cbd","Type":"ContainerStarted","Data":"dd6baaa31c0b9557fb5c3890b51ddd7e5d10c7baf570e3033a67219d9ecac23b"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.407182 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.410141 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-r4q2l" event={"ID":"543475d8-ab28-4406-83cf-ca4c0aecd157","Type":"ContainerStarted","Data":"9e34c5980b590cdcb925d54c7a9784a0bce7a926bccec74caea6e0eaa41fce73"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.410404 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.412016 4909 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-68dmw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 10.217.0.5:6443: connect: connection refused" start-of-body= Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.412035 4909 patch_prober.go:28] interesting pod/console-operator-58897d9998-r4q2l container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.412059 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" podUID="36375488-d0da-488c-b0ac-1e4f63490cbd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 10.217.0.5:6443: connect: connection refused" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.412086 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-r4q2l" podUID="543475d8-ab28-4406-83cf-ca4c0aecd157" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.416480 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" event={"ID":"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0","Type":"ContainerStarted","Data":"097ebcce8d62267f7a1fddedd1da701a1dd634f31350dd1da0288adcd97ea50e"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.417525 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" event={"ID":"0422a643-8fdb-4a70-b120-182517c46a6c","Type":"ContainerStarted","Data":"13822a99042999b669f9c528c4e86402dd3cbac3962082876dd00653a9d6e66d"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.418904 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" event={"ID":"43fdcbf7-b990-48a0-8148-5fd13c5ba035","Type":"ContainerStarted","Data":"2cc31692cbceb32394347fb566b26d4fa5407433c024e062997be2b90fa4e9ad"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.420245 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" event={"ID":"0f21f776-e2f4-41e5-bdb9-6639817afa17","Type":"ContainerStarted","Data":"cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.420277 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" event={"ID":"0f21f776-e2f4-41e5-bdb9-6639817afa17","Type":"ContainerStarted","Data":"a1a36d0fc7ab1cc5e53105f759c1af76ec3b7df67d755f7b5e00eef5f4bd134d"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.420621 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.422288 4909 generic.go:334] "Generic (PLEG): container finished" podID="9f87108b-bbab-4f72-a974-0cb8d188890d" containerID="5425b893b7af9c66c5502f278b4243e4489fb46be9e869863176bad39d752aa8" exitCode=0 Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.422349 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7mqds" event={"ID":"9f87108b-bbab-4f72-a974-0cb8d188890d","Type":"ContainerDied","Data":"5425b893b7af9c66c5502f278b4243e4489fb46be9e869863176bad39d752aa8"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.422371 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7mqds" event={"ID":"9f87108b-bbab-4f72-a974-0cb8d188890d","Type":"ContainerStarted","Data":"e207e6654d9a0aaabdab7ea39fe0b0069cc491607f42a456237be292c74c094b"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.423419 4909 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2qlc6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.423449 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" 
podUID="0f21f776-e2f4-41e5-bdb9-6639817afa17" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.424864 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" event={"ID":"72f34a8b-b736-4737-8f2f-5471054c40f2","Type":"ContainerStarted","Data":"cef96017aaa5ded00fe0473df1ea3c5b4f7340b186e34ca055b696c3758a0c88"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.426021 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" event={"ID":"9e89c18c-706c-4894-a472-01f259f2c854","Type":"ContainerStarted","Data":"0b1521f3cfa5fd8bdefd44008e1e8c636dfe8a6a3fda1a6d5935ff06552c67a5"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.426042 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" event={"ID":"9e89c18c-706c-4894-a472-01f259f2c854","Type":"ContainerStarted","Data":"87e6d8d7b770d9ddfb0fc7c376f4a5f643f46bc05ac118b38ad73ce1bf7fd15c"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.430247 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" event={"ID":"dc63a259-02f9-4ab8-83ac-04baf7e15766","Type":"ContainerStarted","Data":"7000d3425ce92b8d8e61f73eecfba1f2ff471d6586dd3bc1de6a791dd8fe6d08"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.433391 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-59zk5" event={"ID":"9974b977-96c4-4235-8dc8-1bc10284e536","Type":"ContainerStarted","Data":"fd5c0f0606e68e82e09e6a778e5c45cbe8de0dea548a21e03e81d93d08809d82"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.433423 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-59zk5" event={"ID":"9974b977-96c4-4235-8dc8-1bc10284e536","Type":"ContainerStarted","Data":"f76b00db83c59e268d3814da2177a48597a00f32b8b09ea61ab035ec554edd58"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.434883 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zpp77" event={"ID":"c1f4c3b4-d536-424d-aecc-c1ea2228940f","Type":"ContainerStarted","Data":"343e42f1602625ba3a58f3a47ae43e14779bca267c6187ce1d97837052516b8f"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.434921 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zpp77" event={"ID":"c1f4c3b4-d536-424d-aecc-c1ea2228940f","Type":"ContainerStarted","Data":"b86939f5aa0a123b24dc1af8a04af69b2dbdb4de3ecffb58517d459dcae56260"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.436964 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" event={"ID":"97e5a116-5615-4290-bee9-44f45f2433df","Type":"ContainerStarted","Data":"5ba200acf5ebba7335b4211b55499d76ce26f6700f310099ce8f8e560a9711a8"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.438421 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4cmcz" 
event={"ID":"cd48299d-0c3f-4475-b4f5-a00d85b71393","Type":"ContainerStarted","Data":"19b43769bc8462634d416e4153aa24bc7aca68be7554c8b4324dec472d0fd7e8"} Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.460164 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.460537 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:51.960521777 +0000 UTC m=+144.106732943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.500926 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mdwd4" podStartSLOduration=118.500894709 podStartE2EDuration="1m58.500894709s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:51.500499899 +0000 UTC m=+143.646711065" watchObservedRunningTime="2025-11-26 07:02:51.500894709 +0000 UTC m=+143.647105875" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.542875 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-f7bmk" podStartSLOduration=118.542859083 podStartE2EDuration="1m58.542859083s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:51.540570292 +0000 UTC m=+143.686781458" watchObservedRunningTime="2025-11-26 07:02:51.542859083 +0000 UTC m=+143.689070249" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.561658 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.561950 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.061922349 +0000 UTC m=+144.208133515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.562469 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.563487 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.06347838 +0000 UTC m=+144.209689626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: W1126 07:02:51.637720 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11bcb7f4_f89c_4a95_824a_6388e3f69aa5.slice/crio-7b02cdf75db6dbbf480f6257a1bec91694fae72609bd4bb5f96c049b327a076d WatchSource:0}: Error finding container 7b02cdf75db6dbbf480f6257a1bec91694fae72609bd4bb5f96c049b327a076d: Status 404 returned error can't find the container with id 7b02cdf75db6dbbf480f6257a1bec91694fae72609bd4bb5f96c049b327a076d Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.685130 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.185103068 +0000 UTC m=+144.331314234 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.684975 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.687690 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.688322 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.188307644 +0000 UTC m=+144.334518810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.698841 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-59zk5" podStartSLOduration=4.698559885 podStartE2EDuration="4.698559885s" podCreationTimestamp="2025-11-26 07:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:51.689221468 +0000 UTC m=+143.835432644" watchObservedRunningTime="2025-11-26 07:02:51.698559885 +0000 UTC m=+143.844771051" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.769636 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4"] Nov 26 07:02:51 crc kubenswrapper[4909]: W1126 07:02:51.783175 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7f3942_e3a5_47ca_a9eb_becfaa64d62a.slice/crio-f092ebe5991c51669bd5d136f41fcf892eee363950c574a033b758548d08c3df WatchSource:0}: Error finding container f092ebe5991c51669bd5d136f41fcf892eee363950c574a033b758548d08c3df: Status 404 returned error can't find the container with id f092ebe5991c51669bd5d136f41fcf892eee363950c574a033b758548d08c3df Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.787264 4909 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.789420 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.790031 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.290007743 +0000 UTC m=+144.436218909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.792403 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.795077 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.797962 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgf4b"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.799431 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7"] Nov 26 07:02:51 crc kubenswrapper[4909]: W1126 07:02:51.829431 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c118be1_d423_45f8_b280_e72a2773178d.slice/crio-dfe6ccd7e459023201f9146b3d3e43c5c0a21fb83eb63223edce225470f27b12 WatchSource:0}: Error finding container dfe6ccd7e459023201f9146b3d3e43c5c0a21fb83eb63223edce225470f27b12: Status 404 returned error can't find the container with id dfe6ccd7e459023201f9146b3d3e43c5c0a21fb83eb63223edce225470f27b12 Nov 26 07:02:51 crc kubenswrapper[4909]: W1126 07:02:51.846137 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d27a19b_96d4_4443_a6c9_20cbd57d3850.slice/crio-dcfafa1129d13285209875bb081870ad444e7677b78a2e16a5be7bf423f4dc26 WatchSource:0}: Error finding container dcfafa1129d13285209875bb081870ad444e7677b78a2e16a5be7bf423f4dc26: Status 404 returned error can't find the container with id dcfafa1129d13285209875bb081870ad444e7677b78a2e16a5be7bf423f4dc26 Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.872568 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bm9vz"] Nov 26 07:02:51 crc kubenswrapper[4909]: W1126 07:02:51.887440 4909 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e4f7691_475b_4ab8_9c1b_8f482fe9424c.slice/crio-646c3246bd27650c24043fc2baf3261e5ab40f560c29d62a471fe4b62de3aa17 WatchSource:0}: Error finding container 646c3246bd27650c24043fc2baf3261e5ab40f560c29d62a471fe4b62de3aa17: Status 404 returned error can't find the container with id 646c3246bd27650c24043fc2baf3261e5ab40f560c29d62a471fe4b62de3aa17 Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.891895 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.892271 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.392258056 +0000 UTC m=+144.538469222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.892672 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf"] Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.903429 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" podStartSLOduration=118.903412532 podStartE2EDuration="1m58.903412532s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:51.903047023 +0000 UTC m=+144.049258179" watchObservedRunningTime="2025-11-26 07:02:51.903412532 +0000 UTC m=+144.049623698" Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.992775 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.993067 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.493021941 +0000 UTC m=+144.639233107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:51 crc kubenswrapper[4909]: I1126 07:02:51.993268 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:51 crc kubenswrapper[4909]: E1126 07:02:51.993697 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.493677309 +0000 UTC m=+144.639888485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.003408 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cr2tp" podStartSLOduration=119.003385216 podStartE2EDuration="1m59.003385216s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:51.98472789 +0000 UTC m=+144.130939056" watchObservedRunningTime="2025-11-26 07:02:52.003385216 +0000 UTC m=+144.149596382" Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.003746 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r8xvl"] Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.005968 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fbn7q"] Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.008975 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xhcl5"] Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.010886 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw"] Nov 26 07:02:52 crc kubenswrapper[4909]: W1126 07:02:52.033742 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43dfd865_c878_44c6_96cd_5b8fadfbc25f.slice/crio-393ace5f990a54352984c564a5e58ddab4254f8296c48b6340250082381a12ac WatchSource:0}: Error finding container 393ace5f990a54352984c564a5e58ddab4254f8296c48b6340250082381a12ac: Status 404 returned error can't find the container with id 393ace5f990a54352984c564a5e58ddab4254f8296c48b6340250082381a12ac 
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.100397 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.100737 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.60072054 +0000 UTC m=+144.746931706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.202441 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.202850 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.702828089 +0000 UTC m=+144.849039255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.220141 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tsxh4" podStartSLOduration=119.220122658 podStartE2EDuration="1m59.220122658s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:52.217211301 +0000 UTC m=+144.363422467" watchObservedRunningTime="2025-11-26 07:02:52.220122658 +0000 UTC m=+144.366333824"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.304053 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.305812 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.805795413 +0000 UTC m=+144.952006579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.305856 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.306448 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.8064395 +0000 UTC m=+144.952650666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.408802 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.409308 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:52.909284639 +0000 UTC m=+145.055495845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.469858 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz" event={"ID":"de9cd657-3380-48f7-a15c-dd81cdecc57d","Type":"ContainerStarted","Data":"138c86ce945b17091be16354ec92e0e31a60ceb6c08ae8714ec667f1eb8b72c9"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.484478 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" event={"ID":"c027d7f7-ada5-4c58-a49f-b38cfe15c37a","Type":"ContainerStarted","Data":"9e39169d051d23b6b87e417c0b7d35b976c96b52a130d4455a8b8d2b89f564dd"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.489924 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" event={"ID":"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0","Type":"ContainerStarted","Data":"9c4c101a82e16fe762922477ac642799642373e135ac4a0ac1c5e6cf2d887569"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.491221 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" event={"ID":"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f","Type":"ContainerStarted","Data":"b0288266e8f6783bdeadc7b7e22d0aff2ea97479a9eb618c427cb582d8ecd210"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.509942 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.510396 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.010370962 +0000 UTC m=+145.156582128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.518938 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" event={"ID":"9e89c18c-706c-4894-a472-01f259f2c854","Type":"ContainerStarted","Data":"078343c8ca9cd83153c8ba8f91b1f2b6fab7698a51654e9436391b57b83c759a"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.522376 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" event={"ID":"4047b0cd-e695-4486-a4d9-705b6e9863f2","Type":"ContainerStarted","Data":"4672820bc9415a701c2fd229f76d8dec4614337cbf7b05606191146b78c87800"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.535131 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4cmcz" event={"ID":"cd48299d-0c3f-4475-b4f5-a00d85b71393","Type":"ContainerStarted","Data":"4cada5dd97a8cafd4e427b37635089f7a4122522b0363c0c5d5f4ca896092b35"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.535947 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4cmcz"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.538486 4909 patch_prober.go:28] interesting pod/downloads-7954f5f757-4cmcz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.538586 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4cmcz" podUID="cd48299d-0c3f-4475-b4f5-a00d85b71393" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.542144 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r8xvl" event={"ID":"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9","Type":"ContainerStarted","Data":"c530f27ab5f3c1d6c6bb72fccdd1ec1d83a7c177de27ee389b2af193ca762c25"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.551273 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xt76w" event={"ID":"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5","Type":"ContainerStarted","Data":"5d7594e2ff720e47cb657b6bd992a8dea4762dafa5d69014ffdf8ebc6cb8930c"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.553098 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" event={"ID":"f2acf793-348d-40a3-8433-7c82d748228b","Type":"ContainerStarted","Data":"65931960484a36500e874ce5f2458384c592e96456bdb1e29f28710f9fb2b735"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.563896 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" event={"ID":"7e4b01fa-600c-4784-877e-affbde07fb1d","Type":"ContainerStarted","Data":"e7e07a890c9e1eb0acab96479160521c02ddaefc8c828701e2c61a1ccc5be26e"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.568573 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" event={"ID":"480e0e98-6e8e-480e-bf79-fa4d6cba6582","Type":"ContainerStarted","Data":"f0a08495ba86f6ae0874d4253fdbdacb4187e1d058a5bbea599b8c2ad9f1764e"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.594839 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4d2g" podStartSLOduration=119.594822294 podStartE2EDuration="1m59.594822294s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:52.593351585 +0000 UTC m=+144.739562751" watchObservedRunningTime="2025-11-26 07:02:52.594822294 +0000 UTC m=+144.741033460"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.602366 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" event={"ID":"97e5a116-5615-4290-bee9-44f45f2433df","Type":"ContainerStarted","Data":"3724f8d0cbb06ed7d52d1f6cc473dc95a08b10af7ef3d600f215f28419e95ea2"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.611407 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" event={"ID":"0422a643-8fdb-4a70-b120-182517c46a6c","Type":"ContainerStarted","Data":"8bc2a90d8afcda3f6084d311b3f8fea2ad2c91157bb6ea7031d4b86143af84ac"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.611428 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.611606 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.111575859 +0000 UTC m=+145.257787025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.612249 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.614485 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.114470895 +0000 UTC m=+145.260682061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.615356 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz" event={"ID":"9e4f7691-475b-4ab8-9c1b-8f482fe9424c","Type":"ContainerStarted","Data":"646c3246bd27650c24043fc2baf3261e5ab40f560c29d62a471fe4b62de3aa17"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.623206 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" event={"ID":"0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2","Type":"ContainerStarted","Data":"c606c1bc3c85b76a236b59331a0d2b8cc7d009bc3a34d8e95587e03062047349"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.625869 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" event={"ID":"eb3ad8b7-9f5f-49e7-9509-dca22cd87226","Type":"ContainerStarted","Data":"76948b40d876e49339bce775dc731ec256773a402962b0e575b8ab91398c4f5f"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.627736 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" event={"ID":"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9","Type":"ContainerStarted","Data":"4ba79f6932c8d57c83c07a94db61e7fd808d4c1a82e4d28f85b6e804c5bcdeb6"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.629468 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" event={"ID":"11bcb7f4-f89c-4a95-824a-6388e3f69aa5","Type":"ContainerStarted","Data":"7b02cdf75db6dbbf480f6257a1bec91694fae72609bd4bb5f96c049b327a076d"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.630994 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" event={"ID":"2bd06321-043f-48ec-a6d7-b19de03ffbf6","Type":"ContainerStarted","Data":"417646800d74123978176cd598eeb8b5f971bddfde443551bd21a28a5b8c30cd"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.631908 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" event={"ID":"8d27a19b-96d4-4443-a6c9-20cbd57d3850","Type":"ContainerStarted","Data":"dcfafa1129d13285209875bb081870ad444e7677b78a2e16a5be7bf423f4dc26"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.633054 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" event={"ID":"54ec2236-5c8f-4d51-97d0-2145a8c91a0c","Type":"ContainerStarted","Data":"b685e476fa80d76bb7ed3375dd22aa76fe3a6d1771e0acf3d5f7ca348f1130ba"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.633632 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" event={"ID":"6c118be1-d423-45f8-b280-e72a2773178d","Type":"ContainerStarted","Data":"dfe6ccd7e459023201f9146b3d3e43c5c0a21fb83eb63223edce225470f27b12"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.634377 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" event={"ID":"710c8c8c-988b-4d7c-b91d-52724634b484","Type":"ContainerStarted","Data":"18cee79aaedd75be3440d4cecb0825d227d1eaa166b4e6f2b721cb4fa8f59462"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.635109 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.636814 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" event={"ID":"43dfd865-c878-44c6-96cd-5b8fadfbc25f","Type":"ContainerStarted","Data":"393ace5f990a54352984c564a5e58ddab4254f8296c48b6340250082381a12ac"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.638490 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xhcl5" event={"ID":"ee33a2d3-cdb8-4301-9aa5-281eba47d3e5","Type":"ContainerStarted","Data":"b603c1a216c50bbe53bb0aa61ee560b4834a45273d50b0396975a6d9ea05ba3d"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.639172 4909 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pgsgg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body=
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.639201 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" podUID="710c8c8c-988b-4d7c-b91d-52724634b484" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.650392 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4" event={"ID":"22ae4443-3879-489b-a556-474a11712c47","Type":"ContainerStarted","Data":"8de9e7d1c7ec889f01f99892cd577f3c1fbaef5c056ac401283f85e406f7b64f"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.651835 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7zgjj" event={"ID":"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a","Type":"ContainerStarted","Data":"f092ebe5991c51669bd5d136f41fcf892eee363950c574a033b758548d08c3df"}
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.652954 4909 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2qlc6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.652954 4909 patch_prober.go:28] interesting pod/console-operator-58897d9998-r4q2l container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.652985 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" podUID="0f21f776-e2f4-41e5-bdb9-6639817afa17" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.652995 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-r4q2l" podUID="543475d8-ab28-4406-83cf-ca4c0aecd157" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.658556 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.662833 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" podStartSLOduration=119.662818908 podStartE2EDuration="1m59.662818908s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:52.658221837 +0000 UTC m=+144.804433003" watchObservedRunningTime="2025-11-26 07:02:52.662818908 +0000 UTC m=+144.809030074"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.717970 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.718191 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.218156857 +0000 UTC m=+145.364368033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.718453 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.720576 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.220559891 +0000 UTC m=+145.366771057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.757195 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-r4q2l" podStartSLOduration=119.757178793 podStartE2EDuration="1m59.757178793s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:52.756012213 +0000 UTC m=+144.902223399" watchObservedRunningTime="2025-11-26 07:02:52.757178793 +0000 UTC m=+144.903389959"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.822034 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bwg2r" podStartSLOduration=119.822012344 podStartE2EDuration="1m59.822012344s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:52.782551087 +0000 UTC m=+144.928762253" watchObservedRunningTime="2025-11-26 07:02:52.822012344 +0000 UTC m=+144.968223520"
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.825679 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.826041 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.326027411 +0000 UTC m=+145.472238567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:52 crc kubenswrapper[4909]: I1126 07:02:52.934234 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:52 crc kubenswrapper[4909]: E1126 07:02:52.934873 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.43485928 +0000 UTC m=+145.581070446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.027710 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-psqx2" podStartSLOduration=120.027696134 podStartE2EDuration="2m0.027696134s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:53.026284146 +0000 UTC m=+145.172495322" watchObservedRunningTime="2025-11-26 07:02:53.027696134 +0000 UTC m=+145.173907300"
Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.035104 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:02:53 crc kubenswrapper[4909]: E1126 07:02:53.035422 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.535405468 +0000 UTC m=+145.681616634 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.100635 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nxb7l" podStartSLOduration=120.100617269 podStartE2EDuration="2m0.100617269s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:53.061619644 +0000 UTC m=+145.207830810" watchObservedRunningTime="2025-11-26 07:02:53.100617269 +0000 UTC m=+145.246828435" Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.136936 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:53 crc kubenswrapper[4909]: E1126 07:02:53.137311 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.637299893 +0000 UTC m=+145.783511059 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.178789 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4cmcz" podStartSLOduration=120.178770454 podStartE2EDuration="2m0.178770454s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:53.17677063 +0000 UTC m=+145.322981796" watchObservedRunningTime="2025-11-26 07:02:53.178770454 +0000 UTC m=+145.324981620" Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.241318 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:53 crc kubenswrapper[4909]: E1126 07:02:53.241662 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.741647962 +0000 UTC m=+145.887859118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.251607 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" podStartSLOduration=120.251575196 podStartE2EDuration="2m0.251575196s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:53.215659303 +0000 UTC m=+145.361870469" watchObservedRunningTime="2025-11-26 07:02:53.251575196 +0000 UTC m=+145.397786352" Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.254082 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" podStartSLOduration=120.254074772 podStartE2EDuration="2m0.254074772s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:53.25100027 +0000 UTC m=+145.397211436" watchObservedRunningTime="2025-11-26 07:02:53.254074772 +0000 UTC m=+145.400285938" Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.348323 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:53 crc kubenswrapper[4909]: E1126 07:02:53.349025 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.849013421 +0000 UTC m=+145.995224587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.449274 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:53 crc kubenswrapper[4909]: E1126 07:02:53.449705 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:53.949690654 +0000 UTC m=+146.095901820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.553544 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:53 crc kubenswrapper[4909]: E1126 07:02:53.554019 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:54.053986743 +0000 UTC m=+146.200197909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.654010 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:53 crc kubenswrapper[4909]: E1126 07:02:53.654153 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:54.15413118 +0000 UTC m=+146.300342346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.654363 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:53 crc kubenswrapper[4909]: E1126 07:02:53.654775 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:54.154759587 +0000 UTC m=+146.300970753 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.702081 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r8xvl" event={"ID":"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9","Type":"ContainerStarted","Data":"00caba594cabf99a52649ce703f6d345b48b9c189ad71b3800ddfb36a104762a"} Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.726977 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xhcl5" event={"ID":"ee33a2d3-cdb8-4301-9aa5-281eba47d3e5","Type":"ContainerStarted","Data":"580325b0a38cc923f7224b101adda2c7bdee4ed8a12b92f8c245a410cab7cf14"} Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.764301 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:53 crc kubenswrapper[4909]: E1126 07:02:53.764801 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:54.264775957 +0000 UTC m=+146.410987143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.767017 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" event={"ID":"0d3cb9e3-29ec-4ad4-aded-1ad2174a83b2","Type":"ContainerStarted","Data":"2ddccc37a262b8844ace12419804039667da5ab0d667b30e7175068e86a56514"} Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.772952 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xhcl5" podStartSLOduration=6.772936774 podStartE2EDuration="6.772936774s" podCreationTimestamp="2025-11-26 07:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:53.772283777 +0000 UTC m=+145.918494943" watchObservedRunningTime="2025-11-26 07:02:53.772936774 +0000 UTC m=+145.919147940" Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.785364 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7mqds" event={"ID":"9f87108b-bbab-4f72-a974-0cb8d188890d","Type":"ContainerStarted","Data":"f6384fcdf545be9032740944c46bb0f720e2f5a32c810d494ca53fac5d1ce16d"} Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.810459 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" event={"ID":"0422a643-8fdb-4a70-b120-182517c46a6c","Type":"ContainerStarted","Data":"734ad1e7eaf78719007957ae912154c5fa7d6f5729f92afeb6af825a15f5caa1"} Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.810521 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.852078 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" event={"ID":"8d27a19b-96d4-4443-a6c9-20cbd57d3850","Type":"ContainerStarted","Data":"5b355ed602dcaf080b0898d32b90de90fb547d213be9431460c92ff81126bef3"} Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.869127 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:53 crc kubenswrapper[4909]: E1126 07:02:53.870285 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:54.370259067 +0000 UTC m=+146.516470233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.871255 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" podStartSLOduration=120.871229013 podStartE2EDuration="2m0.871229013s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:53.838110064 +0000 UTC m=+145.984321240" watchObservedRunningTime="2025-11-26 07:02:53.871229013 +0000 UTC m=+146.017440179" Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.872399 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sr6mp" podStartSLOduration=120.872393583 podStartE2EDuration="2m0.872393583s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:53.795728859 +0000 UTC m=+145.941940025" watchObservedRunningTime="2025-11-26 07:02:53.872393583 +0000 UTC m=+146.018604749" Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.882047 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wsqtg" podStartSLOduration=120.88203189 podStartE2EDuration="2m0.88203189s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:53.87866022 +0000 UTC m=+146.024871376" watchObservedRunningTime="2025-11-26 07:02:53.88203189 +0000 UTC m=+146.028243056" Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.901420 4909 generic.go:334] "Generic (PLEG): container finished" podID="97e5a116-5615-4290-bee9-44f45f2433df" containerID="3724f8d0cbb06ed7d52d1f6cc473dc95a08b10af7ef3d600f215f28419e95ea2" exitCode=0 Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.901541 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" event={"ID":"97e5a116-5615-4290-bee9-44f45f2433df","Type":"ContainerDied","Data":"3724f8d0cbb06ed7d52d1f6cc473dc95a08b10af7ef3d600f215f28419e95ea2"} Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.940915 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" event={"ID":"eb3ad8b7-9f5f-49e7-9509-dca22cd87226","Type":"ContainerStarted","Data":"40347bf046f80bbe90a48c7a771f7aaf820de169a5865ced619a86ae62d3f6c7"} Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.963203 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" 
event={"ID":"43dfd865-c878-44c6-96cd-5b8fadfbc25f","Type":"ContainerStarted","Data":"10baa28b6e0ce47ff92fe2e84772f23c5b6a3b0623c5f8a4cd0b1556600eaaf5"} Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.971567 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" podStartSLOduration=120.971548146 podStartE2EDuration="2m0.971548146s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:53.941771275 +0000 UTC m=+146.087982441" watchObservedRunningTime="2025-11-26 07:02:53.971548146 +0000 UTC m=+146.117759312" Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.972791 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:53 crc kubenswrapper[4909]: E1126 07:02:53.973840 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:54.473806525 +0000 UTC m=+146.620017691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.974886 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" event={"ID":"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9","Type":"ContainerStarted","Data":"a882b1203af71bfb2ec37035497eae2b02c4970319eea5ad4c4e0321d8ead8ba"} Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.988431 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" event={"ID":"f2acf793-348d-40a3-8433-7c82d748228b","Type":"ContainerStarted","Data":"827b5b008c27d6dc307bdb5ab20cefd15acc7b1ad153113821827f9afe86920e"} Nov 26 07:02:53 crc kubenswrapper[4909]: I1126 07:02:53.989120 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.011419 4909 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-p5hhl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.011475 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" podUID="f2acf793-348d-40a3-8433-7c82d748228b" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.019210 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4" event={"ID":"22ae4443-3879-489b-a556-474a11712c47","Type":"ContainerStarted","Data":"8026bdbb01dfb1c819e65d4aa6dc2cbddcf0fbd982ce96a1bf9ffc3da0faed06"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.026312 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fbn7q" podStartSLOduration=121.026294308 podStartE2EDuration="2m1.026294308s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:53.972146961 +0000 UTC m=+146.118358127" watchObservedRunningTime="2025-11-26 07:02:54.026294308 +0000 UTC m=+146.172505474" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.027500 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xt7tf" podStartSLOduration=121.027495741 podStartE2EDuration="2m1.027495741s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.012107432 +0000 UTC m=+146.158318598" watchObservedRunningTime="2025-11-26 07:02:54.027495741 +0000 UTC m=+146.173706907" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.063418 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7zgjj" event={"ID":"cd7f3942-e3a5-47ca-a9eb-becfaa64d62a","Type":"ContainerStarted","Data":"a6c5ef12bf56a73512425efeca1792908e67851813acf66691367a359516f24f"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.078981 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:54 crc kubenswrapper[4909]: E1126 07:02:54.079861 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:54.57985147 +0000 UTC m=+146.726062636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.080974 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" podStartSLOduration=121.080957429 podStartE2EDuration="2m1.080957429s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.078192106 +0000 UTC m=+146.224403272" watchObservedRunningTime="2025-11-26 07:02:54.080957429 +0000 UTC m=+146.227168595" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.084740 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" event={"ID":"11bcb7f4-f89c-4a95-824a-6388e3f69aa5","Type":"ContainerStarted","Data":"4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.086088 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.087422 4909 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-c4h29 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.087462 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" podUID="11bcb7f4-f89c-4a95-824a-6388e3f69aa5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.116841 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz" event={"ID":"de9cd657-3380-48f7-a15c-dd81cdecc57d","Type":"ContainerStarted","Data":"721529a326e75872ca2cadbce7b8f81f414c616938f77a9c65764c2bab2ecf1c"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.116897 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz" event={"ID":"de9cd657-3380-48f7-a15c-dd81cdecc57d","Type":"ContainerStarted","Data":"d81bad605fca475f6b09c9671397f8b5e7367af83168f0c7f1da9ee1933e55a6"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.140009 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" podStartSLOduration=121.139991666 podStartE2EDuration="2m1.139991666s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-26 07:02:54.102621634 +0000 UTC m=+146.248832800" watchObservedRunningTime="2025-11-26 07:02:54.139991666 +0000 UTC m=+146.286202832" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.144820 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9g6s4" podStartSLOduration=121.144806724 podStartE2EDuration="2m1.144806724s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.14354921 +0000 UTC m=+146.289760376" watchObservedRunningTime="2025-11-26 07:02:54.144806724 +0000 UTC m=+146.291017890" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.167844 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" event={"ID":"54ec2236-5c8f-4d51-97d0-2145a8c91a0c","Type":"ContainerStarted","Data":"8b805588c7becf54d10fbdfe3d8d4b52ae1d664835957e3069fa77af6a3a2163"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.181740 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:54 crc kubenswrapper[4909]: E1126 07:02:54.182906 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:54.682891405 +0000 UTC m=+146.829102571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.184046 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.184894 4909 patch_prober.go:28] interesting pod/router-default-5444994796-7zgjj container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.184911 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zpp77" event={"ID":"c1f4c3b4-d536-424d-aecc-c1ea2228940f","Type":"ContainerStarted","Data":"bc0b02e8a19d46d0373a07612d69d71070686c41e522c7754c0757ead816b103"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.184947 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7zgjj" podUID="cd7f3942-e3a5-47ca-a9eb-becfaa64d62a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.204346 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rpddn" event={"ID":"7e4b01fa-600c-4784-877e-affbde07fb1d","Type":"ContainerStarted","Data":"c0486ffea9b92576390608daf39d533d0b2d88b20909f5692f358365c6124be0"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.227173 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" event={"ID":"bb0d84e2-45ac-4936-b267-d75214779f91","Type":"ContainerStarted","Data":"990e3c71e0cc915e99495b1a3d104f0444c98652ca52e595c317165f8ce4f749"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.227729 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.228204 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-7zgjj" podStartSLOduration=121.228187297 podStartE2EDuration="2m1.228187297s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.168654767 +0000 UTC m=+146.314865933" watchObservedRunningTime="2025-11-26 07:02:54.228187297 +0000 UTC m=+146.374398453" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.228683 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" podStartSLOduration=121.228677789 podStartE2EDuration="2m1.228677789s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.227701604 +0000 UTC m=+146.373912770" watchObservedRunningTime="2025-11-26 07:02:54.228677789 +0000 UTC m=+146.374888956" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.240111 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" event={"ID":"480e0e98-6e8e-480e-bf79-fa4d6cba6582","Type":"ContainerStarted","Data":"46b420d961e2950c0892063ac4a519b7fda9b44315d535f57201f591ed61278f"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.260849 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz" event={"ID":"9e4f7691-475b-4ab8-9c1b-8f482fe9424c","Type":"ContainerStarted","Data":"751c2926acb904d64ec68b65aae5faa7af0d06c0299264afe38b4abd7400acf6"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.269931 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-zpp77" podStartSLOduration=121.269910134 podStartE2EDuration="2m1.269910134s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.266315279 +0000 UTC m=+146.412526445" watchObservedRunningTime="2025-11-26 07:02:54.269910134 +0000 UTC m=+146.416121290" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.283545 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:54 crc kubenswrapper[4909]: E1126 07:02:54.286214 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:54.786202347 +0000 UTC m=+146.932413513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.290416 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" event={"ID":"2bd06321-043f-48ec-a6d7-b19de03ffbf6","Type":"ContainerStarted","Data":"c2f1849d395bfa68412fb08e94f5a2bd2e13bf82c4e7b85a1bc43a543b217cbc"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.290456 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" event={"ID":"2bd06321-043f-48ec-a6d7-b19de03ffbf6","Type":"ContainerStarted","Data":"64f0de293e1b7753e07d8cc92ef2d734059ba6cc6c18a44ec05dfe20c98b0f74"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.298553 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" event={"ID":"6c118be1-d423-45f8-b280-e72a2773178d","Type":"ContainerStarted","Data":"314e3f6bb88335aec37b804221a9ce50a958940288bc84c162c3de86a4f33feb"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.300288 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.309345 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" event={"ID":"c027d7f7-ada5-4c58-a49f-b38cfe15c37a","Type":"ContainerStarted","Data":"7e3d953131817ac1ddc1a2171a22cbfb574f2ae956486d3ef3ad5b49c82af2f7"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.318632 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" podStartSLOduration=121.318574046 podStartE2EDuration="2m1.318574046s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.314334003 +0000 UTC m=+146.460545169" watchObservedRunningTime="2025-11-26 07:02:54.318574046 +0000 UTC m=+146.464785212" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.322958 4909 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kxkmb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.323001 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" podUID="6c118be1-d423-45f8-b280-e72a2773178d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.326363 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" event={"ID":"deb4b7f7-1494-4289-8ac8-fbfcef6c76e0","Type":"ContainerStarted","Data":"0e63f6dfe6c78db437ff337216a12426fe8dd6dd57a8a66e1feaa28f854c0421"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.331946 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" event={"ID":"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f","Type":"ContainerStarted","Data":"3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497"} Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.331992 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.332882 4909 patch_prober.go:28] interesting pod/downloads-7954f5f757-4cmcz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.333009 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4cmcz" podUID="cd48299d-0c3f-4475-b4f5-a00d85b71393" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.355780 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s9lpz" podStartSLOduration=121.355747413 podStartE2EDuration="2m1.355747413s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.347140294 +0000 UTC m=+146.493351480" watchObservedRunningTime="2025-11-26 07:02:54.355747413 +0000 UTC m=+146.501958579" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.356877 4909 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g6sfv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.356948 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" podUID="9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.387096 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:54 crc kubenswrapper[4909]: E1126 07:02:54.388914 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-26 07:02:54.888893293 +0000 UTC m=+147.035104459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.403179 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" podStartSLOduration=121.403161381 podStartE2EDuration="2m1.403161381s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.401781194 +0000 UTC m=+146.547992360" watchObservedRunningTime="2025-11-26 07:02:54.403161381 +0000 UTC m=+146.549372547" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.433391 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" podStartSLOduration=121.433375844 podStartE2EDuration="2m1.433375844s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.431889014 +0000 UTC m=+146.578100180" watchObservedRunningTime="2025-11-26 07:02:54.433375844 +0000 UTC m=+146.579587010" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.489819 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:54 crc kubenswrapper[4909]: E1126 07:02:54.490158 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:54.99014716 +0000 UTC m=+147.136358326 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.526346 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4xv5r" podStartSLOduration=121.526328141 podStartE2EDuration="2m1.526328141s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.526012262 +0000 UTC m=+146.672223428" watchObservedRunningTime="2025-11-26 07:02:54.526328141 +0000 UTC m=+146.672539297" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.527249 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" podStartSLOduration=121.527242194 podStartE2EDuration="2m1.527242194s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.475210863 +0000 UTC m=+146.621422039" watchObservedRunningTime="2025-11-26 07:02:54.527242194 +0000 UTC m=+146.673453360" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.562906 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-klzpc" podStartSLOduration=121.562890751 podStartE2EDuration="2m1.562890751s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.553250284 +0000 UTC m=+146.699461440" watchObservedRunningTime="2025-11-26 07:02:54.562890751 +0000 UTC m=+146.709101917" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.591491 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:54 crc kubenswrapper[4909]: E1126 07:02:54.595020 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:55.094994593 +0000 UTC m=+147.241205749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.620503 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-dgf4b" podStartSLOduration=121.620484459 podStartE2EDuration="2m1.620484459s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.617627353 +0000 UTC m=+146.763838519" watchObservedRunningTime="2025-11-26 07:02:54.620484459 +0000 UTC m=+146.766695625" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.685257 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p4gdd" podStartSLOduration=121.685240848 podStartE2EDuration="2m1.685240848s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.67401555 +0000 UTC m=+146.820226716" watchObservedRunningTime="2025-11-26 07:02:54.685240848 +0000 UTC m=+146.831452014" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.701011 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:54 crc kubenswrapper[4909]: E1126 07:02:54.701270 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:55.201258403 +0000 UTC m=+147.347469569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.802116 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:54 crc kubenswrapper[4909]: E1126 07:02:54.802557 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:55.302538361 +0000 UTC m=+147.448749527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.903805 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:54 crc kubenswrapper[4909]: E1126 07:02:54.904123 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:55.404113028 +0000 UTC m=+147.550324194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.924852 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgsgg" Nov 26 07:02:54 crc kubenswrapper[4909]: I1126 07:02:54.950003 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz" podStartSLOduration=121.949976725 podStartE2EDuration="2m1.949976725s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:54.7587854 +0000 UTC m=+146.904996566" watchObservedRunningTime="2025-11-26 07:02:54.949976725 +0000 UTC m=+147.096187891" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.004669 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.005056 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:55.505041577 +0000 UTC m=+147.651252743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.105855 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.106211 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:55.606199512 +0000 UTC m=+147.752410678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.185626 4909 patch_prober.go:28] interesting pod/router-default-5444994796-7zgjj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 07:02:55 crc kubenswrapper[4909]: [-]has-synced failed: reason withheld Nov 26 07:02:55 crc kubenswrapper[4909]: [+]process-running ok Nov 26 07:02:55 crc kubenswrapper[4909]: healthz check failed Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.185808 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7zgjj" podUID="cd7f3942-e3a5-47ca-a9eb-becfaa64d62a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.207153 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.207476 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:55.707461619 +0000 UTC m=+147.853672785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.308833 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.309133 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:55.809121957 +0000 UTC m=+147.955333113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.340610 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r8xvl" event={"ID":"e3a459d8-796b-4ad3-9a1f-21e7694eb4a9","Type":"ContainerStarted","Data":"fc35264d218c1d667094e012eae9dcae20a0f25b24e935a158f48ab9c5fd1e6b"} Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.340744 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-r8xvl" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.350966 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gvvmw" event={"ID":"54ec2236-5c8f-4d51-97d0-2145a8c91a0c","Type":"ContainerStarted","Data":"6044138f91b1345de21b0263a63c3d3eb4cf014d9ae265e1a86c9439b5d3beb2"} Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.369475 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-r8xvl" podStartSLOduration=8.369455619 podStartE2EDuration="8.369455619s" podCreationTimestamp="2025-11-26 07:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:55.366504031 +0000 UTC m=+147.512715197" watchObservedRunningTime="2025-11-26 07:02:55.369455619 +0000 UTC m=+147.515666785" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.370698 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xt76w" event={"ID":"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5","Type":"ContainerStarted","Data":"8011a9feaf00824932c5d8ee11e91e9444c884ae402dfcee5f4bad522410e10d"} Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.380683 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7mqds" event={"ID":"9f87108b-bbab-4f72-a974-0cb8d188890d","Type":"ContainerStarted","Data":"e8b90a190de5511e71b44b5e129daff1397fb7c3bd4ceb0acaf364dd43c5b180"} Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.387433 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm9vz" event={"ID":"9e4f7691-475b-4ab8-9c1b-8f482fe9424c","Type":"ContainerStarted","Data":"05366628317c5cc1b652eefd1eb8148554d4623ecc73da5fe3aa5d655e39e7c2"} Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.409697 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.409861 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-26 07:02:55.90983949 +0000 UTC m=+148.056050666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.409984 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.410700 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:55.910691703 +0000 UTC m=+148.056902869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.413130 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" event={"ID":"97e5a116-5615-4290-bee9-44f45f2433df","Type":"ContainerStarted","Data":"84bceffb4935c8a46c6b037a3b532b7a9861c80cd1acf940af05eda0153f581d"} Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.414923 4909 patch_prober.go:28] interesting pod/downloads-7954f5f757-4cmcz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.414959 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4cmcz" podUID="cd48299d-0c3f-4475-b4f5-a00d85b71393" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.416757 4909 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g6sfv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.416813 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" podUID="9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": 
dial tcp 10.217.0.33:8080: connect: connection refused" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.425092 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7mqds" podStartSLOduration=122.425077645 podStartE2EDuration="2m2.425077645s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:55.424635683 +0000 UTC m=+147.570846849" watchObservedRunningTime="2025-11-26 07:02:55.425077645 +0000 UTC m=+147.571288811" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.438627 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.439565 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kxkmb" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.439729 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5hhl" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.490126 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.491667 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.511562 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.511681 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.011665043 +0000 UTC m=+148.157876209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.512083 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.514327 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.014317693 +0000 UTC m=+148.160528859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.621423 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.621636 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.121603071 +0000 UTC m=+148.267814237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.621745 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.622165 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.122154766 +0000 UTC m=+148.268365932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.723351 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.723721 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.223706651 +0000 UTC m=+148.369917817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.825468 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.826007 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.325991396 +0000 UTC m=+148.472202562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.926249 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.926426 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.426399481 +0000 UTC m=+148.572610647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:55 crc kubenswrapper[4909]: I1126 07:02:55.926598 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:55 crc kubenswrapper[4909]: E1126 07:02:55.926892 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.426883044 +0000 UTC m=+148.573094210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.027388 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:56 crc kubenswrapper[4909]: E1126 07:02:56.027962 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.527947487 +0000 UTC m=+148.674158643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.128673 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:56 crc kubenswrapper[4909]: E1126 07:02:56.128967 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.628955878 +0000 UTC m=+148.775167044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.184782 4909 patch_prober.go:28] interesting pod/router-default-5444994796-7zgjj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 07:02:56 crc kubenswrapper[4909]: [-]has-synced failed: reason withheld Nov 26 07:02:56 crc kubenswrapper[4909]: [+]process-running ok Nov 26 07:02:56 crc kubenswrapper[4909]: healthz check failed Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.184827 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7zgjj" podUID="cd7f3942-e3a5-47ca-a9eb-becfaa64d62a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.229607 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:56 crc kubenswrapper[4909]: E1126 07:02:56.230094 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.730078432 +0000 UTC m=+148.876289598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.330819 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:56 crc kubenswrapper[4909]: E1126 07:02:56.331153 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.831141974 +0000 UTC m=+148.977353140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.377896 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.423994 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xt76w" event={"ID":"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5","Type":"ContainerStarted","Data":"97a810b6a4cc7b56a7b46e63d3b8aa51e233fbd5b6b46683718549067a838442"} Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.424035 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xt76w" event={"ID":"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5","Type":"ContainerStarted","Data":"c37d13a0baed36ab219d86fa732575a577058994cbf51e160f38d7b945a1e001"} Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.424047 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xt76w" event={"ID":"2c2c78bd-80a9-4543-b1d1-432d3a29d3e5","Type":"ContainerStarted","Data":"ecc8a74e3ff3429bfc0a0218c9944d67cf53d6840381ba4f4e382d44f5d52041"} Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.430660 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dn5tv" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.432166 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:56 crc kubenswrapper[4909]: E1126 07:02:56.432449 
4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:56.932436292 +0000 UTC m=+149.078647458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.439023 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xmhvc" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.533939 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.533988 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.534074 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.534230 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.534259 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:56 crc kubenswrapper[4909]: E1126 07:02:56.537348 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:57.037333357 +0000 UTC m=+149.183544523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.540182 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.541947 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.556269 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.557065 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.574621 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xt76w" podStartSLOduration=9.574604916 podStartE2EDuration="9.574604916s" podCreationTimestamp="2025-11-26 07:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:56.498300321 +0000 UTC m=+148.644511487" watchObservedRunningTime="2025-11-26 07:02:56.574604916 +0000 UTC m=+148.720816082" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.634913 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:56 crc kubenswrapper[4909]: E1126 07:02:56.635235 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:57.135220625 +0000 UTC m=+149.281431791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.635450 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:56 crc kubenswrapper[4909]: E1126 07:02:56.635761 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:57.135754549 +0000 UTC m=+149.281965715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.736632 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:56 crc kubenswrapper[4909]: E1126 07:02:56.736930 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:57.236915584 +0000 UTC m=+149.383126740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.766375 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kslpd"] Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.767297 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.773201 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.790560 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kslpd"] Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.813706 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.820756 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.853356 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.854263 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-catalog-content\") pod \"community-operators-kslpd\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.854302 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpm6r\" (UniqueName: \"kubernetes.io/projected/595bc076-964b-4cf0-a307-688b3458164c-kube-api-access-kpm6r\") pod \"community-operators-kslpd\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.854332 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.854392 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-utilities\") pod \"community-operators-kslpd\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:02:56 crc kubenswrapper[4909]: E1126 07:02:56.854698 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:57.35468721 +0000 UTC m=+149.500898376 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.953097 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4bl49"] Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.953994 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bl49" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.954970 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.955110 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-utilities\") pod \"community-operators-kslpd\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.955151 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-catalog-content\") pod \"community-operators-kslpd\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.955168 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpm6r\" (UniqueName: \"kubernetes.io/projected/595bc076-964b-4cf0-a307-688b3458164c-kube-api-access-kpm6r\") pod \"community-operators-kslpd\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:02:56 crc kubenswrapper[4909]: E1126 07:02:56.955459 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:57.455445194 +0000 UTC m=+149.601656360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.956074 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-utilities\") pod \"community-operators-kslpd\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.956380 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-catalog-content\") pod \"community-operators-kslpd\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.956655 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.980003 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bl49"] Nov 26 07:02:56 crc kubenswrapper[4909]: I1126 07:02:56.999634 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpm6r\" (UniqueName: \"kubernetes.io/projected/595bc076-964b-4cf0-a307-688b3458164c-kube-api-access-kpm6r\") pod \"community-operators-kslpd\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.007560 4909 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.058394 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e602dd02-2a76-453b-932d-3f670998c035-utilities\") pod \"certified-operators-4bl49\" (UID: \"e602dd02-2a76-453b-932d-3f670998c035\") " pod="openshift-marketplace/certified-operators-4bl49" Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.058555 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.058657 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n2l9\" (UniqueName: \"kubernetes.io/projected/e602dd02-2a76-453b-932d-3f670998c035-kube-api-access-7n2l9\") pod \"certified-operators-4bl49\" (UID: \"e602dd02-2a76-453b-932d-3f670998c035\") " pod="openshift-marketplace/certified-operators-4bl49" Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 
Nov 26 07:02:57 crc kubenswrapper[4909]: E1126 07:02:57.059061 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:57.559049134 +0000 UTC m=+149.705260290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.086870 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kslpd"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.152548 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wqz4f"]
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.153756 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.162607 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.162766 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e602dd02-2a76-453b-932d-3f670998c035-utilities\") pod \"certified-operators-4bl49\" (UID: \"e602dd02-2a76-453b-932d-3f670998c035\") " pod="openshift-marketplace/certified-operators-4bl49"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.162840 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n2l9\" (UniqueName: \"kubernetes.io/projected/e602dd02-2a76-453b-932d-3f670998c035-kube-api-access-7n2l9\") pod \"certified-operators-4bl49\" (UID: \"e602dd02-2a76-453b-932d-3f670998c035\") " pod="openshift-marketplace/certified-operators-4bl49"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.162864 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e602dd02-2a76-453b-932d-3f670998c035-catalog-content\") pod \"certified-operators-4bl49\" (UID: \"e602dd02-2a76-453b-932d-3f670998c035\") " pod="openshift-marketplace/certified-operators-4bl49"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.163246 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e602dd02-2a76-453b-932d-3f670998c035-catalog-content\") pod \"certified-operators-4bl49\" (UID: \"e602dd02-2a76-453b-932d-3f670998c035\") " pod="openshift-marketplace/certified-operators-4bl49"
Nov 26 07:02:57 crc kubenswrapper[4909]: E1126 07:02:57.163812 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:57.663798134 +0000 UTC m=+149.810009300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.164011 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e602dd02-2a76-453b-932d-3f670998c035-utilities\") pod \"certified-operators-4bl49\" (UID: \"e602dd02-2a76-453b-932d-3f670998c035\") " pod="openshift-marketplace/certified-operators-4bl49"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.171797 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqz4f"]
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.182062 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n2l9\" (UniqueName: \"kubernetes.io/projected/e602dd02-2a76-453b-932d-3f670998c035-kube-api-access-7n2l9\") pod \"certified-operators-4bl49\" (UID: \"e602dd02-2a76-453b-932d-3f670998c035\") " pod="openshift-marketplace/certified-operators-4bl49"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.265338 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.265383 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-utilities\") pod \"community-operators-wqz4f\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") " pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:02:57 crc kubenswrapper[4909]: E1126 07:02:57.265670 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:57.765658898 +0000 UTC m=+149.911870064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.265760 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q2dj\" (UniqueName: \"kubernetes.io/projected/07aa73fa-a53a-4031-89dd-81c5db3e01ea-kube-api-access-6q2dj\") pod \"community-operators-wqz4f\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") " pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.265796 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-catalog-content\") pod \"community-operators-wqz4f\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") " pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.274847 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bl49"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.344100 4909 patch_prober.go:28] interesting pod/router-default-5444994796-7zgjj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 26 07:02:57 crc kubenswrapper[4909]: [-]has-synced failed: reason withheld
Nov 26 07:02:57 crc kubenswrapper[4909]: [+]process-running ok
Nov 26 07:02:57 crc kubenswrapper[4909]: healthz check failed
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.344147 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7zgjj" podUID="cd7f3942-e3a5-47ca-a9eb-becfaa64d62a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.367021 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.367281 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-utilities\") pod \"community-operators-wqz4f\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") " pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.367318 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q2dj\" (UniqueName: \"kubernetes.io/projected/07aa73fa-a53a-4031-89dd-81c5db3e01ea-kube-api-access-6q2dj\") pod \"community-operators-wqz4f\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") " pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.367345 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-catalog-content\") pod \"community-operators-wqz4f\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") " pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.367803 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-catalog-content\") pod \"community-operators-wqz4f\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") " pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.368037 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-utilities\") pod \"community-operators-wqz4f\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") " pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:02:57 crc kubenswrapper[4909]: E1126 07:02:57.368470 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:57.868452736 +0000 UTC m=+150.014663902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.407548 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f587c"]
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.418883 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.426111 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q2dj\" (UniqueName: \"kubernetes.io/projected/07aa73fa-a53a-4031-89dd-81c5db3e01ea-kube-api-access-6q2dj\") pod \"community-operators-wqz4f\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") " pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.445919 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f587c"]
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.472154 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:57 crc kubenswrapper[4909]: E1126 07:02:57.472450 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:57.972438486 +0000 UTC m=+150.118649652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.574320 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.574903 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsjl8\" (UniqueName: \"kubernetes.io/projected/f60330e9-79bc-4851-9235-f8c4ff95ee96-kube-api-access-dsjl8\") pod \"certified-operators-f587c\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.574951 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-catalog-content\") pod \"certified-operators-f587c\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:02:57 crc kubenswrapper[4909]: E1126 07:02:57.575064 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 07:02:58.075034149 +0000 UTC m=+150.221245345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.575178 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-utilities\") pod \"certified-operators-f587c\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:02:57 crc kubenswrapper[4909]: W1126 07:02:57.607806 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-a02570fe303f2166a8d98b00c81c7c36b47897b183a797f6df732e26d695abb9 WatchSource:0}: Error finding container a02570fe303f2166a8d98b00c81c7c36b47897b183a797f6df732e26d695abb9: Status 404 returned error can't find the container with id a02570fe303f2166a8d98b00c81c7c36b47897b183a797f6df732e26d695abb9
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.676543 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsjl8\" (UniqueName: \"kubernetes.io/projected/f60330e9-79bc-4851-9235-f8c4ff95ee96-kube-api-access-dsjl8\") pod \"certified-operators-f587c\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.676610 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-catalog-content\") pod \"certified-operators-f587c\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.676662 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.676702 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-utilities\") pod \"certified-operators-f587c\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.677267 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-utilities\") pod \"certified-operators-f587c\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.677792 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-catalog-content\") pod \"certified-operators-f587c\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:02:57 crc kubenswrapper[4909]: E1126 07:02:57.678013 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 07:02:58.177999752 +0000 UTC m=+150.324210918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wqlmg" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.683689 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.686261 4909 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-26T07:02:57.007581468Z","Handler":null,"Name":""}
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.708262 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsjl8\" (UniqueName: \"kubernetes.io/projected/f60330e9-79bc-4851-9235-f8c4ff95ee96-kube-api-access-dsjl8\") pod \"certified-operators-f587c\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.708731 4909 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.708759 4909 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.745751 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kslpd"]
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.771970 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f587c"
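[Editor's note] The recovery begins at 07:02:57.007, when the plugin watcher notices the driver's registration socket under /var/lib/kubelet/plugins_registry/, and completes at 07:02:57.708 when csi_plugin.go validates and registers kubevirt.io.hostpath-provisioner. The real kubelet uses inotify plus a gRPC GetInfo/NotifyRegistrationStatus handshake over that socket; the standard-library polling loop below is only a hedged illustration of the discovery step:

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"time"
    )

    // Hedged sketch of plugin-socket discovery: watch the registry directory
    // and add each new socket to a desired-state cache, as the
    // plugin_watcher.go line above records for the hostpath provisioner.
    func main() {
    	const registryDir = "/var/lib/kubelet/plugins_registry"
    	seen := map[string]bool{}
    	for i := 0; i < 3; i++ { // a few polling rounds, for illustration only
    		socks, err := filepath.Glob(filepath.Join(registryDir, "*.sock"))
    		if err != nil {
    			fmt.Println("glob:", err)
    			return
    		}
    		for _, s := range socks {
    			if !seen[s] {
    				seen[s] = true
    				// e.g. kubevirt.io.hostpath-provisioner-reg.sock
    				fmt.Println("adding socket path to desired state cache:", s)
    			}
    		}
    		time.Sleep(time.Second)
    	}
    }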
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.779365 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.803228 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.857945 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bl49"]
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.880321 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.887506 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.887550 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:57 crc kubenswrapper[4909]: I1126 07:02:57.921170 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wqlmg\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.029525 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.110972 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqz4f"]
Nov 26 07:02:58 crc kubenswrapper[4909]: W1126 07:02:58.125828 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07aa73fa_a53a_4031_89dd_81c5db3e01ea.slice/crio-b5f4821ff94ddf4915ce2d1ff027939dc255fa846c8db2900e63423dd5629983 WatchSource:0}: Error finding container b5f4821ff94ddf4915ce2d1ff027939dc255fa846c8db2900e63423dd5629983: Status 404 returned error can't find the container with id b5f4821ff94ddf4915ce2d1ff027939dc255fa846c8db2900e63423dd5629983
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.187832 4909 patch_prober.go:28] interesting pod/router-default-5444994796-7zgjj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 26 07:02:58 crc kubenswrapper[4909]: [-]has-synced failed: reason withheld
Nov 26 07:02:58 crc kubenswrapper[4909]: [+]process-running ok
Nov 26 07:02:58 crc kubenswrapper[4909]: healthz check failed
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.187903 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7zgjj" podUID="cd7f3942-e3a5-47ca-a9eb-becfaa64d62a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.246012 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f587c"]
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.255488 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wqlmg"]
Nov 26 07:02:58 crc kubenswrapper[4909]: W1126 07:02:58.262289 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03e3a595_33da_47a5_ba74_cb7c535134ca.slice/crio-7d9f137e482a429bae52df64157bdb58b89d25dca6f51affacee0fa28bbfb306 WatchSource:0}: Error finding container 7d9f137e482a429bae52df64157bdb58b89d25dca6f51affacee0fa28bbfb306: Status 404 returned error can't find the container with id 7d9f137e482a429bae52df64157bdb58b89d25dca6f51affacee0fa28bbfb306
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.436306 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8bfa2a1b5cfe0789b86fa2c64f8c2aa9b6e589a0c6aed4b2ceecaf1cb604e6f7"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.436627 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a02570fe303f2166a8d98b00c81c7c36b47897b183a797f6df732e26d695abb9"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.439187 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" event={"ID":"03e3a595-33da-47a5-ba74-cb7c535134ca","Type":"ContainerStarted","Data":"94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.439219 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" event={"ID":"03e3a595-33da-47a5-ba74-cb7c535134ca","Type":"ContainerStarted","Data":"7d9f137e482a429bae52df64157bdb58b89d25dca6f51affacee0fa28bbfb306"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.439572 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg"
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.441031 4909 generic.go:334] "Generic (PLEG): container finished" podID="e602dd02-2a76-453b-932d-3f670998c035" containerID="34c29a40af5e15d21b00e736720a8586370c5a2edc416776b35d47d19d740861" exitCode=0
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.441107 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bl49" event={"ID":"e602dd02-2a76-453b-932d-3f670998c035","Type":"ContainerDied","Data":"34c29a40af5e15d21b00e736720a8586370c5a2edc416776b35d47d19d740861"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.441126 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bl49" event={"ID":"e602dd02-2a76-453b-932d-3f670998c035","Type":"ContainerStarted","Data":"51234d0274bae9d969013a9143f2137284debf49c59d37d97dc6ee291605f3e3"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.442951 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.449684 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f3650bcf85cf882d0bad0ee2a3314f003e266264e746e5657c9e5825459e4a83"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.449722 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"048e1a58e24c42b7f30255079162a5413d8932b6945277361ed25215454aa8d5"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.460170 4909 generic.go:334] "Generic (PLEG): container finished" podID="595bc076-964b-4cf0-a307-688b3458164c" containerID="06ecff6b545e23bf76814c7c4dc3648ff2275ce47a12b9687a9f72cca4ca70d7" exitCode=0
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.460239 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kslpd" event={"ID":"595bc076-964b-4cf0-a307-688b3458164c","Type":"ContainerDied","Data":"06ecff6b545e23bf76814c7c4dc3648ff2275ce47a12b9687a9f72cca4ca70d7"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.460265 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kslpd" event={"ID":"595bc076-964b-4cf0-a307-688b3458164c","Type":"ContainerStarted","Data":"78f6cdaeeffec025aec8ebe630db85fbb45006b20f9852bf4e4cbb4a42400868"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.461974 4909 generic.go:334] "Generic (PLEG): container finished" podID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" containerID="0b8c4fccba0784cbb4ef6369a2c12e29dec21159b31d20d5f66f8be12ea40006" exitCode=0
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.461998 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4f" event={"ID":"07aa73fa-a53a-4031-89dd-81c5db3e01ea","Type":"ContainerDied","Data":"0b8c4fccba0784cbb4ef6369a2c12e29dec21159b31d20d5f66f8be12ea40006"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.462026 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4f" event={"ID":"07aa73fa-a53a-4031-89dd-81c5db3e01ea","Type":"ContainerStarted","Data":"b5f4821ff94ddf4915ce2d1ff027939dc255fa846c8db2900e63423dd5629983"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.464344 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"295db7268d45287e0a1d1a444c9579a012040363fdb2d789266b33edd2d77eb5"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.465125 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a9272c5f3d13b05fe0f06751d5a18692f2b5419339736e481b18fd4a9c257cf4"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.465331 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.467687 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f587c" event={"ID":"f60330e9-79bc-4851-9235-f8c4ff95ee96","Type":"ContainerStarted","Data":"ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.467755 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f587c" event={"ID":"f60330e9-79bc-4851-9235-f8c4ff95ee96","Type":"ContainerStarted","Data":"7ac2d6d4619671a02d6d5f8e4532d111b9a99f082e08b9298390b8bed758f3d7"}
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.487719 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" podStartSLOduration=125.487696723 podStartE2EDuration="2m5.487696723s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:02:58.485883575 +0000 UTC m=+150.632094741" watchObservedRunningTime="2025-11-26 07:02:58.487696723 +0000 UTC m=+150.633907889"
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.531503 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.954776 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k62sn"]
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.956351 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.958777 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.965700 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k62sn"]
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.997163 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlsv9\" (UniqueName: \"kubernetes.io/projected/aabdf0c7-5fdc-4103-beab-05890462e3e2-kube-api-access-qlsv9\") pod \"redhat-marketplace-k62sn\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.997538 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-utilities\") pod \"redhat-marketplace-k62sn\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:02:58 crc kubenswrapper[4909]: I1126 07:02:58.997622 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-catalog-content\") pod \"redhat-marketplace-k62sn\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.099139 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlsv9\" (UniqueName: \"kubernetes.io/projected/aabdf0c7-5fdc-4103-beab-05890462e3e2-kube-api-access-qlsv9\") pod \"redhat-marketplace-k62sn\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.099175 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-utilities\") pod \"redhat-marketplace-k62sn\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.099237 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-catalog-content\") pod \"redhat-marketplace-k62sn\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.099910 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-catalog-content\") pod \"redhat-marketplace-k62sn\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.100115 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-utilities\") pod \"redhat-marketplace-k62sn\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.126371 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlsv9\" (UniqueName: \"kubernetes.io/projected/aabdf0c7-5fdc-4103-beab-05890462e3e2-kube-api-access-qlsv9\") pod \"redhat-marketplace-k62sn\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.187012 4909 patch_prober.go:28] interesting pod/router-default-5444994796-7zgjj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 26 07:02:59 crc kubenswrapper[4909]: [-]has-synced failed: reason withheld
Nov 26 07:02:59 crc kubenswrapper[4909]: [+]process-running ok
Nov 26 07:02:59 crc kubenswrapper[4909]: healthz check failed
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.187085 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7zgjj" podUID="cd7f3942-e3a5-47ca-a9eb-becfaa64d62a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.302713 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.304234 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.305051 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.306727 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.307206 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.351516 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.400792 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-db4mw"]
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.402264 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/164ecf33-50f7-404d-915e-cc17d8eb6c71-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"164ecf33-50f7-404d-915e-cc17d8eb6c71\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.402336 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/164ecf33-50f7-404d-915e-cc17d8eb6c71-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"164ecf33-50f7-404d-915e-cc17d8eb6c71\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.402644 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-db4mw"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.404148 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-db4mw"]
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.495069 4909 generic.go:334] "Generic (PLEG): container finished" podID="f60330e9-79bc-4851-9235-f8c4ff95ee96" containerID="ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772" exitCode=0
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.495167 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f587c" event={"ID":"f60330e9-79bc-4851-9235-f8c4ff95ee96","Type":"ContainerDied","Data":"ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772"}
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.497288 4909 generic.go:334] "Generic (PLEG): container finished" podID="adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9" containerID="a882b1203af71bfb2ec37035497eae2b02c4970319eea5ad4c4e0321d8ead8ba" exitCode=0
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.497543 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" event={"ID":"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9","Type":"ContainerDied","Data":"a882b1203af71bfb2ec37035497eae2b02c4970319eea5ad4c4e0321d8ead8ba"}
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.503091 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-utilities\") pod \"redhat-marketplace-db4mw\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " pod="openshift-marketplace/redhat-marketplace-db4mw"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.503170 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/164ecf33-50f7-404d-915e-cc17d8eb6c71-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"164ecf33-50f7-404d-915e-cc17d8eb6c71\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.503190 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt48r\" (UniqueName: \"kubernetes.io/projected/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-kube-api-access-xt48r\") pod \"redhat-marketplace-db4mw\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " pod="openshift-marketplace/redhat-marketplace-db4mw"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.503239 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-catalog-content\") pod \"redhat-marketplace-db4mw\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " pod="openshift-marketplace/redhat-marketplace-db4mw"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.503279 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/164ecf33-50f7-404d-915e-cc17d8eb6c71-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"164ecf33-50f7-404d-915e-cc17d8eb6c71\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.503430 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/164ecf33-50f7-404d-915e-cc17d8eb6c71-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"164ecf33-50f7-404d-915e-cc17d8eb6c71\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
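[Editor's note] The VerifyControllerAttachedVolume / MountVolume started / MountVolume.SetUp succeeded triplets that dominate this window are the kubelet's volume manager reconciling desired state (the pod's declared volumes) against actual state (what is attached and mounted). A compressed, hedged sketch of that loop, with invented types and phases, not the kubelet's own code:

    package main

    import "fmt"

    // Illustrative desired-vs-actual volume reconciliation, mirroring the
    // reconciler_common.go progression in the lines above.
    type volumeState int

    const (
    	unattached volumeState = iota
    	attached
    	mounted
    )

    func reconcile(desired map[string]bool, actual map[string]volumeState) {
    	for vol := range desired {
    		switch actual[vol] {
    		case unattached:
    			fmt.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q\n", vol)
    			actual[vol] = attached
    		case attached:
    			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
    			actual[vol] = mounted
    		case mounted:
    			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
    		}
    	}
    }

    func main() {
    	desired := map[string]bool{"kube-api-access": true, "kubelet-dir": true}
    	actual := map[string]volumeState{}
    	for i := 0; i < 3; i++ { // three passes walk each volume through the phases
    		reconcile(desired, actual)
    	}
    }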
"MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/164ecf33-50f7-404d-915e-cc17d8eb6c71-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"164ecf33-50f7-404d-915e-cc17d8eb6c71\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.528294 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/164ecf33-50f7-404d-915e-cc17d8eb6c71-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"164ecf33-50f7-404d-915e-cc17d8eb6c71\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.599248 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k62sn"] Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.604906 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-utilities\") pod \"redhat-marketplace-db4mw\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " pod="openshift-marketplace/redhat-marketplace-db4mw" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.605037 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt48r\" (UniqueName: \"kubernetes.io/projected/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-kube-api-access-xt48r\") pod \"redhat-marketplace-db4mw\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " pod="openshift-marketplace/redhat-marketplace-db4mw" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.605210 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-catalog-content\") pod \"redhat-marketplace-db4mw\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " pod="openshift-marketplace/redhat-marketplace-db4mw" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.605438 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-utilities\") pod \"redhat-marketplace-db4mw\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " pod="openshift-marketplace/redhat-marketplace-db4mw" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.606150 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-catalog-content\") pod \"redhat-marketplace-db4mw\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " pod="openshift-marketplace/redhat-marketplace-db4mw" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.625779 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt48r\" (UniqueName: \"kubernetes.io/projected/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-kube-api-access-xt48r\") pod \"redhat-marketplace-db4mw\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " pod="openshift-marketplace/redhat-marketplace-db4mw" Nov 26 07:02:59 crc kubenswrapper[4909]: W1126 07:02:59.638728 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaabdf0c7_5fdc_4103_beab_05890462e3e2.slice/crio-defda3376eef9d1868d2553030255cf28fb7d7e3b23bd502862a571eb1236f2e WatchSource:0}: Error finding container 
defda3376eef9d1868d2553030255cf28fb7d7e3b23bd502862a571eb1236f2e: Status 404 returned error can't find the container with id defda3376eef9d1868d2553030255cf28fb7d7e3b23bd502862a571eb1236f2e Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.700480 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.726313 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-r4q2l" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.729417 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-db4mw" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.783662 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.786346 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.796775 4909 patch_prober.go:28] interesting pod/console-f9d7485db-f7bmk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.796895 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-f7bmk" podUID="32133cc3-d6eb-48c5-a3fc-11e820ed8a48" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.797432 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.797795 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.805002 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.821387 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.946359 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5k95k"] Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.947806 4909 util.go:30] "No sandbox for pod can be found. 
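[Editor's note] The router's startup probe output earlier in this window is a healthz-style roll-up: each named check reports [+] ok or [-] failed, and any failure turns the endpoint into an HTTP 500, which prober.go then records as a probe failure. A hedged sketch of such an aggregated health handler (simplified; not the router's actual code, and the check names are copied from the log only for illustration):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    // Aggregated healthz endpoint: any failing check yields statuscode 500,
    // matching the "[-]backend-http failed ... healthz check failed" output
    // that the startup probe reports above.
    func main() {
    	checks := map[string]func() error{
    		"backend-http":    func() error { return fmt.Errorf("reason withheld") },
    		"has-synced":      func() error { return fmt.Errorf("reason withheld") },
    		"process-running": func() error { return nil },
    	}
    	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
    		failed := false
    		body := ""
    		for name, check := range checks {
    			if err := check(); err != nil {
    				failed = true
    				body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
    			} else {
    				body += fmt.Sprintf("[+]%s ok\n", name)
    			}
    		}
    		if failed {
    			w.WriteHeader(http.StatusInternalServerError) // the probe sees statuscode: 500
    			body += "healthz check failed\n"
    		}
    		fmt.Fprint(w, body)
    	})
    	_ = http.ListenAndServe(":8080", nil)
    }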
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.954799 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 26 07:02:59 crc kubenswrapper[4909]: I1126 07:02:59.973330 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5k95k"]
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.014997 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-catalog-content\") pod \"redhat-operators-5k95k\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.015057 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-utilities\") pod \"redhat-operators-5k95k\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.015130 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdcjf\" (UniqueName: \"kubernetes.io/projected/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-kube-api-access-tdcjf\") pod \"redhat-operators-5k95k\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.080055 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.080899 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.082633 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.082705 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.117228 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdcjf\" (UniqueName: \"kubernetes.io/projected/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-kube-api-access-tdcjf\") pod \"redhat-operators-5k95k\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.117292 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e3aa18b4-93d7-4cf2-91b2-26c526555a56\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.117320 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-catalog-content\") pod \"redhat-operators-5k95k\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.117344 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-utilities\") pod \"redhat-operators-5k95k\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.117385 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e3aa18b4-93d7-4cf2-91b2-26c526555a56\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.118021 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-catalog-content\") pod \"redhat-operators-5k95k\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.118228 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-utilities\") pod \"redhat-operators-5k95k\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.125906 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.139905 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdcjf\" (UniqueName: \"kubernetes.io/projected/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-kube-api-access-tdcjf\") pod \"redhat-operators-5k95k\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.142078 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-db4mw"]
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.191821 4909 patch_prober.go:28] interesting pod/router-default-5444994796-7zgjj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 26 07:03:00 crc kubenswrapper[4909]: [-]has-synced failed: reason withheld
Nov 26 07:03:00 crc kubenswrapper[4909]: [+]process-running ok
Nov 26 07:03:00 crc kubenswrapper[4909]: healthz check failed
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.192122 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7zgjj" podUID="cd7f3942-e3a5-47ca-a9eb-becfaa64d62a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.219997 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e3aa18b4-93d7-4cf2-91b2-26c526555a56\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.220106 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e3aa18b4-93d7-4cf2-91b2-26c526555a56\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.220185 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e3aa18b4-93d7-4cf2-91b2-26c526555a56\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.244490 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e3aa18b4-93d7-4cf2-91b2-26c526555a56\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.270926 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 26 07:03:00 crc kubenswrapper[4909]: W1126 07:03:00.281733 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod164ecf33_50f7_404d_915e_cc17d8eb6c71.slice/crio-dc5d568b22fea1a051ab7edc70694f7f56acff96fdbb96492aa5a9dcd0fc3f0d WatchSource:0}: Error finding container dc5d568b22fea1a051ab7edc70694f7f56acff96fdbb96492aa5a9dcd0fc3f0d: Status 404 returned error can't find the container with id dc5d568b22fea1a051ab7edc70694f7f56acff96fdbb96492aa5a9dcd0fc3f0d
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.296167 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.353173 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7stcn"]
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.354368 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7stcn"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.373215 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7stcn"]
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.419366 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.423474 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-catalog-content\") pod \"redhat-operators-7stcn\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") " pod="openshift-marketplace/redhat-operators-7stcn"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.423534 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-utilities\") pod \"redhat-operators-7stcn\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") " pod="openshift-marketplace/redhat-operators-7stcn"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.423635 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdq7b\" (UniqueName: \"kubernetes.io/projected/ff1983ba-304c-41d3-a747-88631e6e5c0f-kube-api-access-qdq7b\") pod \"redhat-operators-7stcn\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") " pod="openshift-marketplace/redhat-operators-7stcn"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.430295 4909 patch_prober.go:28] interesting pod/downloads-7954f5f757-4cmcz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.430337 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4cmcz" podUID="cd48299d-0c3f-4475-b4f5-a00d85b71393" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.431970 4909 patch_prober.go:28] interesting pod/downloads-7954f5f757-4cmcz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.432075 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4cmcz" podUID="cd48299d-0c3f-4475-b4f5-a00d85b71393" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 26 07:03:00 crc kubenswrapper[4909]: I1126
07:03:00.524797 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-utilities\") pod \"redhat-operators-7stcn\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") " pod="openshift-marketplace/redhat-operators-7stcn" Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.525181 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdq7b\" (UniqueName: \"kubernetes.io/projected/ff1983ba-304c-41d3-a747-88631e6e5c0f-kube-api-access-qdq7b\") pod \"redhat-operators-7stcn\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") " pod="openshift-marketplace/redhat-operators-7stcn" Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.525276 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-catalog-content\") pod \"redhat-operators-7stcn\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") " pod="openshift-marketplace/redhat-operators-7stcn" Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.525300 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-utilities\") pod \"redhat-operators-7stcn\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") " pod="openshift-marketplace/redhat-operators-7stcn" Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.525556 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-catalog-content\") pod \"redhat-operators-7stcn\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") " pod="openshift-marketplace/redhat-operators-7stcn" Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.551058 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdq7b\" (UniqueName: \"kubernetes.io/projected/ff1983ba-304c-41d3-a747-88631e6e5c0f-kube-api-access-qdq7b\") pod \"redhat-operators-7stcn\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") " pod="openshift-marketplace/redhat-operators-7stcn" Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.569106 4909 generic.go:334] "Generic (PLEG): container finished" podID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" containerID="2c956ed820923a14d3cdbc37b3838334c79300e4b50a328280bb4135107e05f4" exitCode=0 Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.569175 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4mw" event={"ID":"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed","Type":"ContainerDied","Data":"2c956ed820923a14d3cdbc37b3838334c79300e4b50a328280bb4135107e05f4"} Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.569207 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4mw" event={"ID":"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed","Type":"ContainerStarted","Data":"911e506fe44f0ab715b870a8e052a2c8c218f145f50535d901562517364ee3fb"} Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.572973 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"164ecf33-50f7-404d-915e-cc17d8eb6c71","Type":"ContainerStarted","Data":"dc5d568b22fea1a051ab7edc70694f7f56acff96fdbb96492aa5a9dcd0fc3f0d"} Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 
07:03:00.587460 4909 generic.go:334] "Generic (PLEG): container finished" podID="aabdf0c7-5fdc-4103-beab-05890462e3e2" containerID="53182805f0893ed9391924d2bac41b79c986caabcf36e169ff5ff8d6c892790c" exitCode=0 Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.588890 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k62sn" event={"ID":"aabdf0c7-5fdc-4103-beab-05890462e3e2","Type":"ContainerDied","Data":"53182805f0893ed9391924d2bac41b79c986caabcf36e169ff5ff8d6c892790c"} Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.588923 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k62sn" event={"ID":"aabdf0c7-5fdc-4103-beab-05890462e3e2","Type":"ContainerStarted","Data":"defda3376eef9d1868d2553030255cf28fb7d7e3b23bd502862a571eb1236f2e"} Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.598301 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7mqds" Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.691412 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7stcn" Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.807726 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5k95k"] Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.918178 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 26 07:03:00 crc kubenswrapper[4909]: I1126 07:03:00.970070 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.012345 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.052262 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d5ks\" (UniqueName: \"kubernetes.io/projected/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-kube-api-access-9d5ks\") pod \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.052386 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-secret-volume\") pod \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.052449 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-config-volume\") pod \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\" (UID: \"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9\") " Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.055236 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-config-volume" (OuterVolumeSpecName: "config-volume") pod "adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9" (UID: "adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.060007 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-kube-api-access-9d5ks" (OuterVolumeSpecName: "kube-api-access-9d5ks") pod "adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9" (UID: "adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9"). InnerVolumeSpecName "kube-api-access-9d5ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.063830 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9" (UID: "adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.156568 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d5ks\" (UniqueName: \"kubernetes.io/projected/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-kube-api-access-9d5ks\") on node \"crc\" DevicePath \"\"" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.157740 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.157758 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.182801 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.186100 4909 patch_prober.go:28] interesting pod/router-default-5444994796-7zgjj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 07:03:01 crc kubenswrapper[4909]: [-]has-synced failed: reason withheld Nov 26 07:03:01 crc kubenswrapper[4909]: [+]process-running ok Nov 26 07:03:01 crc kubenswrapper[4909]: healthz check failed Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.186133 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7zgjj" podUID="cd7f3942-e3a5-47ca-a9eb-becfaa64d62a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.500222 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7stcn"] Nov 26 07:03:01 crc kubenswrapper[4909]: W1126 07:03:01.505355 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff1983ba_304c_41d3_a747_88631e6e5c0f.slice/crio-721a52a110eb5471ac2ad582aa5bb5075db7fe417e2bd18ae127daa20eae6f05 WatchSource:0}: Error finding container 721a52a110eb5471ac2ad582aa5bb5075db7fe417e2bd18ae127daa20eae6f05: Status 404 returned error can't find the container with id 721a52a110eb5471ac2ad582aa5bb5075db7fe417e2bd18ae127daa20eae6f05 Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.598656 
4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"164ecf33-50f7-404d-915e-cc17d8eb6c71","Type":"ContainerStarted","Data":"3145547835b3babece58578b17a858a309ed6a05da3be566943943531e19ab22"} Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.614084 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7stcn" event={"ID":"ff1983ba-304c-41d3-a747-88631e6e5c0f","Type":"ContainerStarted","Data":"721a52a110eb5471ac2ad582aa5bb5075db7fe417e2bd18ae127daa20eae6f05"} Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.625743 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.625725643 podStartE2EDuration="2.625725643s" podCreationTimestamp="2025-11-26 07:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:03:01.624954313 +0000 UTC m=+153.771165479" watchObservedRunningTime="2025-11-26 07:03:01.625725643 +0000 UTC m=+153.771936809" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.635113 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" event={"ID":"adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9","Type":"ContainerDied","Data":"4ba79f6932c8d57c83c07a94db61e7fd808d4c1a82e4d28f85b6e804c5bcdeb6"} Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.635158 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba79f6932c8d57c83c07a94db61e7fd808d4c1a82e4d28f85b6e804c5bcdeb6" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.635730 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7" Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.637970 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e3aa18b4-93d7-4cf2-91b2-26c526555a56","Type":"ContainerStarted","Data":"cd0817f6aba2d5720ef508366d6780f8c261f3cec0f746d7d4dfb343ec61bc08"} Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.644310 4909 generic.go:334] "Generic (PLEG): container finished" podID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" containerID="b01522f11998987a60f5077a4f9e2e97d86652de2ae4d509df380713cfd26b08" exitCode=0 Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.645583 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k95k" event={"ID":"28f2292b-c3f2-4e25-ad1d-45a4b78bebff","Type":"ContainerDied","Data":"b01522f11998987a60f5077a4f9e2e97d86652de2ae4d509df380713cfd26b08"} Nov 26 07:03:01 crc kubenswrapper[4909]: I1126 07:03:01.646357 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k95k" event={"ID":"28f2292b-c3f2-4e25-ad1d-45a4b78bebff","Type":"ContainerStarted","Data":"8b8b5c4545913445de6e90fea5ea948756aefb662ba044026d5b3b8c4dbe80ed"} Nov 26 07:03:02 crc kubenswrapper[4909]: I1126 07:03:02.186386 4909 patch_prober.go:28] interesting pod/router-default-5444994796-7zgjj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 07:03:02 crc kubenswrapper[4909]: [+]has-synced ok Nov 26 07:03:02 crc kubenswrapper[4909]: [+]process-running ok Nov 26 07:03:02 crc kubenswrapper[4909]: healthz check failed Nov 26 07:03:02 crc kubenswrapper[4909]: I1126 07:03:02.186486 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7zgjj" podUID="cd7f3942-e3a5-47ca-a9eb-becfaa64d62a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 07:03:02 crc kubenswrapper[4909]: I1126 07:03:02.656789 4909 generic.go:334] "Generic (PLEG): container finished" podID="e3aa18b4-93d7-4cf2-91b2-26c526555a56" containerID="a16c4510412520a8badbf784ae006f65018ef814cc983a2d92eff91c7291893e" exitCode=0 Nov 26 07:03:02 crc kubenswrapper[4909]: I1126 07:03:02.657062 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e3aa18b4-93d7-4cf2-91b2-26c526555a56","Type":"ContainerDied","Data":"a16c4510412520a8badbf784ae006f65018ef814cc983a2d92eff91c7291893e"} Nov 26 07:03:02 crc kubenswrapper[4909]: I1126 07:03:02.661101 4909 generic.go:334] "Generic (PLEG): container finished" podID="164ecf33-50f7-404d-915e-cc17d8eb6c71" containerID="3145547835b3babece58578b17a858a309ed6a05da3be566943943531e19ab22" exitCode=0 Nov 26 07:03:02 crc kubenswrapper[4909]: I1126 07:03:02.661149 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"164ecf33-50f7-404d-915e-cc17d8eb6c71","Type":"ContainerDied","Data":"3145547835b3babece58578b17a858a309ed6a05da3be566943943531e19ab22"} Nov 26 07:03:02 crc kubenswrapper[4909]: I1126 07:03:02.663087 4909 generic.go:334] "Generic (PLEG): container finished" podID="ff1983ba-304c-41d3-a747-88631e6e5c0f" containerID="31fbd4073976753e8ec86bd0c230aec1cac701e00f050b0d70869b409b04495d" exitCode=0 
Nov 26 07:03:02 crc kubenswrapper[4909]: I1126 07:03:02.664145 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7stcn" event={"ID":"ff1983ba-304c-41d3-a747-88631e6e5c0f","Type":"ContainerDied","Data":"31fbd4073976753e8ec86bd0c230aec1cac701e00f050b0d70869b409b04495d"} Nov 26 07:03:03 crc kubenswrapper[4909]: I1126 07:03:03.126260 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-r8xvl" Nov 26 07:03:03 crc kubenswrapper[4909]: I1126 07:03:03.217865 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:03:03 crc kubenswrapper[4909]: I1126 07:03:03.221718 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-7zgjj" Nov 26 07:03:03 crc kubenswrapper[4909]: I1126 07:03:03.971149 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 07:03:03 crc kubenswrapper[4909]: I1126 07:03:03.977529 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.132083 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kube-api-access\") pod \"e3aa18b4-93d7-4cf2-91b2-26c526555a56\" (UID: \"e3aa18b4-93d7-4cf2-91b2-26c526555a56\") " Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.132202 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/164ecf33-50f7-404d-915e-cc17d8eb6c71-kube-api-access\") pod \"164ecf33-50f7-404d-915e-cc17d8eb6c71\" (UID: \"164ecf33-50f7-404d-915e-cc17d8eb6c71\") " Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.133815 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/164ecf33-50f7-404d-915e-cc17d8eb6c71-kubelet-dir\") pod \"164ecf33-50f7-404d-915e-cc17d8eb6c71\" (UID: \"164ecf33-50f7-404d-915e-cc17d8eb6c71\") " Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.134306 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kubelet-dir\") pod \"e3aa18b4-93d7-4cf2-91b2-26c526555a56\" (UID: \"e3aa18b4-93d7-4cf2-91b2-26c526555a56\") " Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.134544 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/164ecf33-50f7-404d-915e-cc17d8eb6c71-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "164ecf33-50f7-404d-915e-cc17d8eb6c71" (UID: "164ecf33-50f7-404d-915e-cc17d8eb6c71"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.134674 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e3aa18b4-93d7-4cf2-91b2-26c526555a56" (UID: "e3aa18b4-93d7-4cf2-91b2-26c526555a56"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.142889 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e3aa18b4-93d7-4cf2-91b2-26c526555a56" (UID: "e3aa18b4-93d7-4cf2-91b2-26c526555a56"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.160519 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/164ecf33-50f7-404d-915e-cc17d8eb6c71-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "164ecf33-50f7-404d-915e-cc17d8eb6c71" (UID: "164ecf33-50f7-404d-915e-cc17d8eb6c71"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.237828 4909 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/164ecf33-50f7-404d-915e-cc17d8eb6c71-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.237878 4909 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.238888 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3aa18b4-93d7-4cf2-91b2-26c526555a56-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.238934 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/164ecf33-50f7-404d-915e-cc17d8eb6c71-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.694092 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"164ecf33-50f7-404d-915e-cc17d8eb6c71","Type":"ContainerDied","Data":"dc5d568b22fea1a051ab7edc70694f7f56acff96fdbb96492aa5a9dcd0fc3f0d"} Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.694148 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc5d568b22fea1a051ab7edc70694f7f56acff96fdbb96492aa5a9dcd0fc3f0d" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.694216 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.702292 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e3aa18b4-93d7-4cf2-91b2-26c526555a56","Type":"ContainerDied","Data":"cd0817f6aba2d5720ef508366d6780f8c261f3cec0f746d7d4dfb343ec61bc08"} Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.702342 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 07:03:04 crc kubenswrapper[4909]: I1126 07:03:04.702356 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd0817f6aba2d5720ef508366d6780f8c261f3cec0f746d7d4dfb343ec61bc08" Nov 26 07:03:07 crc kubenswrapper[4909]: I1126 07:03:07.300548 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:03:07 crc kubenswrapper[4909]: I1126 07:03:07.300847 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:03:09 crc kubenswrapper[4909]: I1126 07:03:09.794799 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:03:09 crc kubenswrapper[4909]: I1126 07:03:09.802233 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-f7bmk" Nov 26 07:03:10 crc kubenswrapper[4909]: I1126 07:03:10.430584 4909 patch_prober.go:28] interesting pod/downloads-7954f5f757-4cmcz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 26 07:03:10 crc kubenswrapper[4909]: I1126 07:03:10.430625 4909 patch_prober.go:28] interesting pod/downloads-7954f5f757-4cmcz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 26 07:03:10 crc kubenswrapper[4909]: I1126 07:03:10.430658 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4cmcz" podUID="cd48299d-0c3f-4475-b4f5-a00d85b71393" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 26 07:03:10 crc kubenswrapper[4909]: I1126 07:03:10.430668 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4cmcz" podUID="cd48299d-0c3f-4475-b4f5-a00d85b71393" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 26 07:03:15 crc kubenswrapper[4909]: I1126 07:03:15.972448 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:03:15 crc kubenswrapper[4909]: I1126 07:03:15.978146 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e91888f-077f-4be0-a258-568bde5c10bd-metrics-certs\") pod \"network-metrics-daemon-8llwb\" (UID: \"6e91888f-077f-4be0-a258-568bde5c10bd\") " 
pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:03:16 crc kubenswrapper[4909]: I1126 07:03:16.119520 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8llwb" Nov 26 07:03:18 crc kubenswrapper[4909]: I1126 07:03:18.036101 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:03:20 crc kubenswrapper[4909]: I1126 07:03:20.452481 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4cmcz" Nov 26 07:03:29 crc kubenswrapper[4909]: E1126 07:03:29.057806 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 26 07:03:29 crc kubenswrapper[4909]: E1126 07:03:29.058565 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n2l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4bl49_openshift-marketplace(e602dd02-2a76-453b-932d-3f670998c035): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 07:03:29 crc kubenswrapper[4909]: E1126 07:03:29.059890 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4bl49" podUID="e602dd02-2a76-453b-932d-3f670998c035" Nov 26 07:03:29 crc kubenswrapper[4909]: E1126 07:03:29.846234 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" 
Nov 26 07:03:29 crc kubenswrapper[4909]: E1126 07:03:29.846373 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qlsv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-k62sn_openshift-marketplace(aabdf0c7-5fdc-4103-beab-05890462e3e2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 07:03:29 crc kubenswrapper[4909]: E1126 07:03:29.847516 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-k62sn" podUID="aabdf0c7-5fdc-4103-beab-05890462e3e2" Nov 26 07:03:29 crc kubenswrapper[4909]: E1126 07:03:29.870467 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 26 07:03:29 crc kubenswrapper[4909]: E1126 07:03:29.870648 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsjl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-f587c_openshift-marketplace(f60330e9-79bc-4851-9235-f8c4ff95ee96): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 07:03:29 crc kubenswrapper[4909]: E1126 07:03:29.871961 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-f587c" podUID="f60330e9-79bc-4851-9235-f8c4ff95ee96" Nov 26 07:03:30 crc kubenswrapper[4909]: I1126 07:03:30.458420 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tsxsb" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.277165 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-k62sn" podUID="aabdf0c7-5fdc-4103-beab-05890462e3e2" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.277192 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4bl49" podUID="e602dd02-2a76-453b-932d-3f670998c035" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.277254 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-f587c" podUID="f60330e9-79bc-4851-9235-f8c4ff95ee96" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.367166 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.367210 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.367510 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q2dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wqz4f_openshift-marketplace(07aa73fa-a53a-4031-89dd-81c5db3e01ea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.367546 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xt48r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-db4mw_openshift-marketplace(7e88b90d-4bc8-40a1-94bf-ac42f7a78eed): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.368666 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wqz4f" podUID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.368692 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-db4mw" podUID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.387412 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.387576 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpm6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-kslpd_openshift-marketplace(595bc076-964b-4cf0-a307-688b3458164c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.388736 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kslpd" podUID="595bc076-964b-4cf0-a307-688b3458164c" Nov 26 07:03:31 crc kubenswrapper[4909]: I1126 07:03:31.705909 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8llwb"] Nov 26 07:03:31 crc kubenswrapper[4909]: I1126 07:03:31.859658 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8llwb" event={"ID":"6e91888f-077f-4be0-a258-568bde5c10bd","Type":"ContainerStarted","Data":"0d57cb2551d43a58ecaa3657410a8674a0499d03bd18afa4c61b034e5782ece8"} Nov 26 07:03:31 crc kubenswrapper[4909]: I1126 07:03:31.862213 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7stcn" event={"ID":"ff1983ba-304c-41d3-a747-88631e6e5c0f","Type":"ContainerStarted","Data":"b45194c6288fd4f8d280e49f42e8a47943a4d7c974f5e7ce6d62d0c181185caf"} Nov 26 07:03:31 crc kubenswrapper[4909]: I1126 07:03:31.865875 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k95k" event={"ID":"28f2292b-c3f2-4e25-ad1d-45a4b78bebff","Type":"ContainerStarted","Data":"cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b"} Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.867303 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-db4mw" podUID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 
07:03:31.867830 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-kslpd" podUID="595bc076-964b-4cf0-a307-688b3458164c"
Nov 26 07:03:31 crc kubenswrapper[4909]: E1126 07:03:31.868424 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wqz4f" podUID="07aa73fa-a53a-4031-89dd-81c5db3e01ea"
Nov 26 07:03:32 crc kubenswrapper[4909]: I1126 07:03:32.883654 4909 generic.go:334] "Generic (PLEG): container finished" podID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" containerID="cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b" exitCode=0
Nov 26 07:03:32 crc kubenswrapper[4909]: I1126 07:03:32.884329 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k95k" event={"ID":"28f2292b-c3f2-4e25-ad1d-45a4b78bebff","Type":"ContainerDied","Data":"cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b"}
Nov 26 07:03:32 crc kubenswrapper[4909]: I1126 07:03:32.890409 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8llwb" event={"ID":"6e91888f-077f-4be0-a258-568bde5c10bd","Type":"ContainerStarted","Data":"a2072ef29708e269c0bd6f97cfc3e2b893cf4568fa539c084ce786e874aceb8d"}
Nov 26 07:03:32 crc kubenswrapper[4909]: I1126 07:03:32.890464 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8llwb" event={"ID":"6e91888f-077f-4be0-a258-568bde5c10bd","Type":"ContainerStarted","Data":"5438260e670b10d529e3465483e96fe89b68757defc222e3387d7ad70a17690d"}
Nov 26 07:03:32 crc kubenswrapper[4909]: I1126 07:03:32.892547 4909 generic.go:334] "Generic (PLEG): container finished" podID="ff1983ba-304c-41d3-a747-88631e6e5c0f" containerID="b45194c6288fd4f8d280e49f42e8a47943a4d7c974f5e7ce6d62d0c181185caf" exitCode=0
Nov 26 07:03:32 crc kubenswrapper[4909]: I1126 07:03:32.892584 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7stcn" event={"ID":"ff1983ba-304c-41d3-a747-88631e6e5c0f","Type":"ContainerDied","Data":"b45194c6288fd4f8d280e49f42e8a47943a4d7c974f5e7ce6d62d0c181185caf"}
Nov 26 07:03:32 crc kubenswrapper[4909]: I1126 07:03:32.940946 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-8llwb" podStartSLOduration=159.940927197 podStartE2EDuration="2m39.940927197s" podCreationTimestamp="2025-11-26 07:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:03:32.920844044 +0000 UTC m=+185.067055200" watchObservedRunningTime="2025-11-26 07:03:32.940927197 +0000 UTC m=+185.087138363"
Nov 26 07:03:33 crc kubenswrapper[4909]: I1126 07:03:33.900870 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7stcn" event={"ID":"ff1983ba-304c-41d3-a747-88631e6e5c0f","Type":"ContainerStarted","Data":"b96eca09469f0a3da24d7822cf22d0a6b8bf2609bd5672ef4b317290b5d5187e"}
Nov 26 07:03:33 crc kubenswrapper[4909]: I1126 07:03:33.904910 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k95k" event={"ID":"28f2292b-c3f2-4e25-ad1d-45a4b78bebff","Type":"ContainerStarted","Data":"948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c"}
Nov 26 07:03:33 crc kubenswrapper[4909]: I1126 07:03:33.923778 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7stcn" podStartSLOduration=3.13306259 podStartE2EDuration="33.923758303s" podCreationTimestamp="2025-11-26 07:03:00 +0000 UTC" firstStartedPulling="2025-11-26 07:03:02.665639224 +0000 UTC m=+154.811850390" lastFinishedPulling="2025-11-26 07:03:33.456334937 +0000 UTC m=+185.602546103" observedRunningTime="2025-11-26 07:03:33.92098838 +0000 UTC m=+186.067199576" watchObservedRunningTime="2025-11-26 07:03:33.923758303 +0000 UTC m=+186.069969469"
Nov 26 07:03:33 crc kubenswrapper[4909]: I1126 07:03:33.939635 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5k95k" podStartSLOduration=3.174244653 podStartE2EDuration="34.939615685s" podCreationTimestamp="2025-11-26 07:02:59 +0000 UTC" firstStartedPulling="2025-11-26 07:03:01.647242545 +0000 UTC m=+153.793453711" lastFinishedPulling="2025-11-26 07:03:33.412613577 +0000 UTC m=+185.558824743" observedRunningTime="2025-11-26 07:03:33.937715994 +0000 UTC m=+186.083927170" watchObservedRunningTime="2025-11-26 07:03:33.939615685 +0000 UTC m=+186.085826851"
Nov 26 07:03:36 crc kubenswrapper[4909]: I1126 07:03:36.869247 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 07:03:37 crc kubenswrapper[4909]: I1126 07:03:37.300432 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 07:03:37 crc kubenswrapper[4909]: I1126 07:03:37.300489 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 07:03:38 crc kubenswrapper[4909]: I1126 07:03:38.742255 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-68dmw"]
Nov 26 07:03:40 crc kubenswrapper[4909]: I1126 07:03:40.297165 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:40 crc kubenswrapper[4909]: I1126 07:03:40.297469 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:40 crc kubenswrapper[4909]: I1126 07:03:40.418958 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:40 crc kubenswrapper[4909]: I1126 07:03:40.691523 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7stcn"
Nov 26 07:03:40 crc kubenswrapper[4909]: I1126 07:03:40.691884 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7stcn"
Nov 26 07:03:40 crc kubenswrapper[4909]: I1126 07:03:40.749003 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7stcn"
Nov 26 07:03:40 crc kubenswrapper[4909]: I1126 07:03:40.977481 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7stcn"
Nov 26 07:03:40 crc kubenswrapper[4909]: I1126 07:03:40.984038 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5k95k"
Nov 26 07:03:41 crc kubenswrapper[4909]: I1126 07:03:41.647720 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7stcn"]
Nov 26 07:03:42 crc kubenswrapper[4909]: I1126 07:03:42.952947 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7stcn" podUID="ff1983ba-304c-41d3-a747-88631e6e5c0f" containerName="registry-server" containerID="cri-o://b96eca09469f0a3da24d7822cf22d0a6b8bf2609bd5672ef4b317290b5d5187e" gracePeriod=2
Nov 26 07:03:44 crc kubenswrapper[4909]: I1126 07:03:44.964670 4909 generic.go:334] "Generic (PLEG): container finished" podID="ff1983ba-304c-41d3-a747-88631e6e5c0f" containerID="b96eca09469f0a3da24d7822cf22d0a6b8bf2609bd5672ef4b317290b5d5187e" exitCode=0
Nov 26 07:03:44 crc kubenswrapper[4909]: I1126 07:03:44.964739 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7stcn" event={"ID":"ff1983ba-304c-41d3-a747-88631e6e5c0f","Type":"ContainerDied","Data":"b96eca09469f0a3da24d7822cf22d0a6b8bf2609bd5672ef4b317290b5d5187e"}
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.222267 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7stcn"
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.412040 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-catalog-content\") pod \"ff1983ba-304c-41d3-a747-88631e6e5c0f\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") "
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.412166 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-utilities\") pod \"ff1983ba-304c-41d3-a747-88631e6e5c0f\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") "
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.412201 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdq7b\" (UniqueName: \"kubernetes.io/projected/ff1983ba-304c-41d3-a747-88631e6e5c0f-kube-api-access-qdq7b\") pod \"ff1983ba-304c-41d3-a747-88631e6e5c0f\" (UID: \"ff1983ba-304c-41d3-a747-88631e6e5c0f\") "
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.412874 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-utilities" (OuterVolumeSpecName: "utilities") pod "ff1983ba-304c-41d3-a747-88631e6e5c0f" (UID: "ff1983ba-304c-41d3-a747-88631e6e5c0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.417502 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1983ba-304c-41d3-a747-88631e6e5c0f-kube-api-access-qdq7b" (OuterVolumeSpecName: "kube-api-access-qdq7b") pod "ff1983ba-304c-41d3-a747-88631e6e5c0f" (UID: "ff1983ba-304c-41d3-a747-88631e6e5c0f"). InnerVolumeSpecName "kube-api-access-qdq7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.513828 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.513851 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdq7b\" (UniqueName: \"kubernetes.io/projected/ff1983ba-304c-41d3-a747-88631e6e5c0f-kube-api-access-qdq7b\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.900949 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff1983ba-304c-41d3-a747-88631e6e5c0f" (UID: "ff1983ba-304c-41d3-a747-88631e6e5c0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.919243 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1983ba-304c-41d3-a747-88631e6e5c0f-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.972932 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7stcn" event={"ID":"ff1983ba-304c-41d3-a747-88631e6e5c0f","Type":"ContainerDied","Data":"721a52a110eb5471ac2ad582aa5bb5075db7fe417e2bd18ae127daa20eae6f05"}
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.972985 4909 scope.go:117] "RemoveContainer" containerID="b96eca09469f0a3da24d7822cf22d0a6b8bf2609bd5672ef4b317290b5d5187e"
Nov 26 07:03:45 crc kubenswrapper[4909]: I1126 07:03:45.973810 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7stcn"
Nov 26 07:03:46 crc kubenswrapper[4909]: I1126 07:03:46.003936 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7stcn"]
Nov 26 07:03:46 crc kubenswrapper[4909]: I1126 07:03:46.009985 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7stcn"]
Nov 26 07:03:46 crc kubenswrapper[4909]: I1126 07:03:46.184579 4909 scope.go:117] "RemoveContainer" containerID="b45194c6288fd4f8d280e49f42e8a47943a4d7c974f5e7ce6d62d0c181185caf"
Nov 26 07:03:46 crc kubenswrapper[4909]: I1126 07:03:46.203634 4909 scope.go:117] "RemoveContainer" containerID="31fbd4073976753e8ec86bd0c230aec1cac701e00f050b0d70869b409b04495d"
Nov 26 07:03:46 crc kubenswrapper[4909]: I1126 07:03:46.504734 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1983ba-304c-41d3-a747-88631e6e5c0f" path="/var/lib/kubelet/pods/ff1983ba-304c-41d3-a747-88631e6e5c0f/volumes"
Nov 26 07:03:46 crc kubenswrapper[4909]: I1126 07:03:46.985691 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f587c" event={"ID":"f60330e9-79bc-4851-9235-f8c4ff95ee96","Type":"ContainerStarted","Data":"a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e"}
Nov 26 07:03:47 crc kubenswrapper[4909]: I1126 07:03:47.993834 4909 generic.go:334] "Generic (PLEG): container finished" podID="f60330e9-79bc-4851-9235-f8c4ff95ee96" containerID="a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e" exitCode=0
Nov 26 07:03:47 crc kubenswrapper[4909]: I1126 07:03:47.993887 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f587c" event={"ID":"f60330e9-79bc-4851-9235-f8c4ff95ee96","Type":"ContainerDied","Data":"a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e"}
Nov 26 07:03:49 crc kubenswrapper[4909]: I1126 07:03:49.007394 4909 generic.go:334] "Generic (PLEG): container finished" podID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" containerID="dbd8962677d168de044f64cd7ea3bb689a7f8d79b107cee28e140ef49279051e" exitCode=0
Nov 26 07:03:49 crc kubenswrapper[4909]: I1126 07:03:49.007419 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4mw" event={"ID":"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed","Type":"ContainerDied","Data":"dbd8962677d168de044f64cd7ea3bb689a7f8d79b107cee28e140ef49279051e"}
Nov 26 07:03:49 crc kubenswrapper[4909]: I1126 07:03:49.012829 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f587c" event={"ID":"f60330e9-79bc-4851-9235-f8c4ff95ee96","Type":"ContainerStarted","Data":"16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a"}
Nov 26 07:03:49 crc kubenswrapper[4909]: I1126 07:03:49.014987 4909 generic.go:334] "Generic (PLEG): container finished" podID="e602dd02-2a76-453b-932d-3f670998c035" containerID="bb8e8cd9f530dcba762c814f51f834363194893a28d1e84f5f02840dc0d5e30f" exitCode=0
Nov 26 07:03:49 crc kubenswrapper[4909]: I1126 07:03:49.015047 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bl49" event={"ID":"e602dd02-2a76-453b-932d-3f670998c035","Type":"ContainerDied","Data":"bb8e8cd9f530dcba762c814f51f834363194893a28d1e84f5f02840dc0d5e30f"}
Nov 26 07:03:49 crc kubenswrapper[4909]: I1126 07:03:49.016656 4909 generic.go:334] "Generic (PLEG): container finished" podID="595bc076-964b-4cf0-a307-688b3458164c" containerID="f38a660f4fe2e5d67b31ca4713098d3ca2827fad311f0a331f24aa3092234cff" exitCode=0
Nov 26 07:03:49 crc kubenswrapper[4909]: I1126 07:03:49.016711 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kslpd" event={"ID":"595bc076-964b-4cf0-a307-688b3458164c","Type":"ContainerDied","Data":"f38a660f4fe2e5d67b31ca4713098d3ca2827fad311f0a331f24aa3092234cff"}
Nov 26 07:03:49 crc kubenswrapper[4909]: I1126 07:03:49.018475 4909 generic.go:334] "Generic (PLEG): container finished" podID="aabdf0c7-5fdc-4103-beab-05890462e3e2" containerID="707b7a8a4045728367da6bd3cb5b94752e201f7861290e805190b25a4eca0d3f" exitCode=0
Nov 26 07:03:49 crc kubenswrapper[4909]: I1126 07:03:49.018520 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k62sn" event={"ID":"aabdf0c7-5fdc-4103-beab-05890462e3e2","Type":"ContainerDied","Data":"707b7a8a4045728367da6bd3cb5b94752e201f7861290e805190b25a4eca0d3f"}
Nov 26 07:03:49 crc kubenswrapper[4909]: I1126 07:03:49.021073 4909 generic.go:334] "Generic (PLEG): container finished" podID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" containerID="3d30635138f666bc5c09a3c7085bf488645908a41f419a39a1d7cad937973f38" exitCode=0
Nov 26 07:03:49 crc kubenswrapper[4909]: I1126 07:03:49.021096 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4f" event={"ID":"07aa73fa-a53a-4031-89dd-81c5db3e01ea","Type":"ContainerDied","Data":"3d30635138f666bc5c09a3c7085bf488645908a41f419a39a1d7cad937973f38"}
Nov 26 07:03:52 crc kubenswrapper[4909]: I1126 07:03:52.037448 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4mw" event={"ID":"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed","Type":"ContainerStarted","Data":"3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c"}
Nov 26 07:03:52 crc kubenswrapper[4909]: I1126 07:03:52.039807 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bl49" event={"ID":"e602dd02-2a76-453b-932d-3f670998c035","Type":"ContainerStarted","Data":"a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532"}
Nov 26 07:03:52 crc kubenswrapper[4909]: I1126 07:03:52.041794 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kslpd" event={"ID":"595bc076-964b-4cf0-a307-688b3458164c","Type":"ContainerStarted","Data":"5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0"}
Nov 26 07:03:52 crc kubenswrapper[4909]: I1126 07:03:52.044455 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k62sn" event={"ID":"aabdf0c7-5fdc-4103-beab-05890462e3e2","Type":"ContainerStarted","Data":"3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019"}
Nov 26 07:03:52 crc kubenswrapper[4909]: I1126 07:03:52.046813 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4f" event={"ID":"07aa73fa-a53a-4031-89dd-81c5db3e01ea","Type":"ContainerStarted","Data":"227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf"}
Nov 26 07:03:52 crc kubenswrapper[4909]: I1126 07:03:52.060701 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f587c" podStartSLOduration=5.113997897 podStartE2EDuration="55.060680868s" podCreationTimestamp="2025-11-26 07:02:57 +0000 UTC" firstStartedPulling="2025-11-26 07:02:58.47099932 +0000 UTC m=+150.617210486" lastFinishedPulling="2025-11-26 07:03:48.417682291 +0000 UTC m=+200.563893457" observedRunningTime="2025-11-26 07:03:49.108061704 +0000 UTC m=+201.254272880" watchObservedRunningTime="2025-11-26 07:03:52.060680868 +0000 UTC m=+204.206892044"
Nov 26 07:03:52 crc kubenswrapper[4909]: I1126 07:03:52.080449 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4bl49" podStartSLOduration=3.124214808 podStartE2EDuration="56.080428361s" podCreationTimestamp="2025-11-26 07:02:56 +0000 UTC" firstStartedPulling="2025-11-26 07:02:58.442696759 +0000 UTC m=+150.588907925" lastFinishedPulling="2025-11-26 07:03:51.398910312 +0000 UTC m=+203.545121478" observedRunningTime="2025-11-26 07:03:52.078207282 +0000 UTC m=+204.224418448" watchObservedRunningTime="2025-11-26 07:03:52.080428361 +0000 UTC m=+204.226639527"
Nov 26 07:03:52 crc kubenswrapper[4909]: I1126 07:03:52.083876 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-db4mw" podStartSLOduration=2.174822876 podStartE2EDuration="53.083867282s" podCreationTimestamp="2025-11-26 07:02:59 +0000 UTC" firstStartedPulling="2025-11-26 07:03:00.571005088 +0000 UTC m=+152.717216254" lastFinishedPulling="2025-11-26 07:03:51.480049464 +0000 UTC m=+203.626260660" observedRunningTime="2025-11-26 07:03:52.05965622 +0000 UTC m=+204.205867386" watchObservedRunningTime="2025-11-26 07:03:52.083867282 +0000 UTC m=+204.230078448"
Nov 26 07:03:52 crc kubenswrapper[4909]: I1126 07:03:52.098165 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k62sn" podStartSLOduration=3.021229193 podStartE2EDuration="54.098142221s" podCreationTimestamp="2025-11-26 07:02:58 +0000 UTC" firstStartedPulling="2025-11-26 07:03:00.599351291 +0000 UTC m=+152.745562457" lastFinishedPulling="2025-11-26 07:03:51.676264319 +0000 UTC m=+203.822475485" observedRunningTime="2025-11-26 07:03:52.096858216 +0000 UTC m=+204.243069392" watchObservedRunningTime="2025-11-26 07:03:52.098142221 +0000 UTC m=+204.244353387"
Nov 26 07:03:52 crc kubenswrapper[4909]: I1126 07:03:52.147757 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kslpd" podStartSLOduration=3.184872149 podStartE2EDuration="56.147735757s" podCreationTimestamp="2025-11-26 07:02:56 +0000 UTC" firstStartedPulling="2025-11-26 07:02:58.461705664 +0000 UTC m=+150.607916850" lastFinishedPulling="2025-11-26 07:03:51.424569292 +0000 UTC m=+203.570780458" observedRunningTime="2025-11-26 07:03:52.144738387 +0000 UTC m=+204.290949563" watchObservedRunningTime="2025-11-26 07:03:52.147735757 +0000 UTC m=+204.293946923"
Nov 26 07:03:52 crc kubenswrapper[4909]: I1126 07:03:52.149571 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wqz4f" podStartSLOduration=1.8596549900000001 podStartE2EDuration="55.149561945s" podCreationTimestamp="2025-11-26 07:02:57 +0000 UTC" firstStartedPulling="2025-11-26 07:02:58.463808479 +0000 UTC m=+150.610019645" lastFinishedPulling="2025-11-26 07:03:51.753715444 +0000 UTC m=+203.899926600" observedRunningTime="2025-11-26 07:03:52.124974922 +0000 UTC m=+204.271186088" watchObservedRunningTime="2025-11-26 07:03:52.149561945 +0000 UTC m=+204.295773111"
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.261524 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2qlc6"]
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.261824 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" podUID="0f21f776-e2f4-41e5-bdb9-6639817afa17" containerName="controller-manager" containerID="cri-o://cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1" gracePeriod=30
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.341072 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29"]
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.341499 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" podUID="11bcb7f4-f89c-4a95-824a-6388e3f69aa5" containerName="route-controller-manager" containerID="cri-o://4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85" gracePeriod=30
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.624897 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6"
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.677440 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29"
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.718028 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-proxy-ca-bundles\") pod \"0f21f776-e2f4-41e5-bdb9-6639817afa17\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") "
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.718116 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-config\") pod \"0f21f776-e2f4-41e5-bdb9-6639817afa17\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") "
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.718138 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f21f776-e2f4-41e5-bdb9-6639817afa17-serving-cert\") pod \"0f21f776-e2f4-41e5-bdb9-6639817afa17\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") "
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.718187 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfgxr\" (UniqueName: \"kubernetes.io/projected/0f21f776-e2f4-41e5-bdb9-6639817afa17-kube-api-access-dfgxr\") pod \"0f21f776-e2f4-41e5-bdb9-6639817afa17\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") "
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.718216 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-client-ca\") pod \"0f21f776-e2f4-41e5-bdb9-6639817afa17\" (UID: \"0f21f776-e2f4-41e5-bdb9-6639817afa17\") "
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.719069 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-client-ca" (OuterVolumeSpecName: "client-ca") pod "0f21f776-e2f4-41e5-bdb9-6639817afa17" (UID: "0f21f776-e2f4-41e5-bdb9-6639817afa17"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.719321 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0f21f776-e2f4-41e5-bdb9-6639817afa17" (UID: "0f21f776-e2f4-41e5-bdb9-6639817afa17"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.719684 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-config" (OuterVolumeSpecName: "config") pod "0f21f776-e2f4-41e5-bdb9-6639817afa17" (UID: "0f21f776-e2f4-41e5-bdb9-6639817afa17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.725113 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f21f776-e2f4-41e5-bdb9-6639817afa17-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0f21f776-e2f4-41e5-bdb9-6639817afa17" (UID: "0f21f776-e2f4-41e5-bdb9-6639817afa17"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.728204 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f21f776-e2f4-41e5-bdb9-6639817afa17-kube-api-access-dfgxr" (OuterVolumeSpecName: "kube-api-access-dfgxr") pod "0f21f776-e2f4-41e5-bdb9-6639817afa17" (UID: "0f21f776-e2f4-41e5-bdb9-6639817afa17"). InnerVolumeSpecName "kube-api-access-dfgxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.819174 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-serving-cert\") pod \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") "
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.819240 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-client-ca\") pod \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") "
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.819285 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk492\" (UniqueName: \"kubernetes.io/projected/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-kube-api-access-jk492\") pod \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") "
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.819344 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-config\") pod \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\" (UID: \"11bcb7f4-f89c-4a95-824a-6388e3f69aa5\") "
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.819576 4909 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.819587 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.819610 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f21f776-e2f4-41e5-bdb9-6639817afa17-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.819622 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfgxr\" (UniqueName: \"kubernetes.io/projected/0f21f776-e2f4-41e5-bdb9-6639817afa17-kube-api-access-dfgxr\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.819631 4909 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f21f776-e2f4-41e5-bdb9-6639817afa17-client-ca\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.820220 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-config" (OuterVolumeSpecName: "config") pod "11bcb7f4-f89c-4a95-824a-6388e3f69aa5" (UID: "11bcb7f4-f89c-4a95-824a-6388e3f69aa5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.820249 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-client-ca" (OuterVolumeSpecName: "client-ca") pod "11bcb7f4-f89c-4a95-824a-6388e3f69aa5" (UID: "11bcb7f4-f89c-4a95-824a-6388e3f69aa5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.823654 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-kube-api-access-jk492" (OuterVolumeSpecName: "kube-api-access-jk492") pod "11bcb7f4-f89c-4a95-824a-6388e3f69aa5" (UID: "11bcb7f4-f89c-4a95-824a-6388e3f69aa5"). InnerVolumeSpecName "kube-api-access-jk492". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.825037 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "11bcb7f4-f89c-4a95-824a-6388e3f69aa5" (UID: "11bcb7f4-f89c-4a95-824a-6388e3f69aa5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.920384 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.920430 4909 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-client-ca\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.920444 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk492\" (UniqueName: \"kubernetes.io/projected/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-kube-api-access-jk492\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:53 crc kubenswrapper[4909]: I1126 07:03:53.920457 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bcb7f4-f89c-4a95-824a-6388e3f69aa5-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.057461 4909 generic.go:334] "Generic (PLEG): container finished" podID="0f21f776-e2f4-41e5-bdb9-6639817afa17" containerID="cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1" exitCode=0
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.057513 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6"
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.057530 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" event={"ID":"0f21f776-e2f4-41e5-bdb9-6639817afa17","Type":"ContainerDied","Data":"cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1"}
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.057918 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2qlc6" event={"ID":"0f21f776-e2f4-41e5-bdb9-6639817afa17","Type":"ContainerDied","Data":"a1a36d0fc7ab1cc5e53105f759c1af76ec3b7df67d755f7b5e00eef5f4bd134d"}
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.057935 4909 scope.go:117] "RemoveContainer" containerID="cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1"
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.059502 4909 generic.go:334] "Generic (PLEG): container finished" podID="11bcb7f4-f89c-4a95-824a-6388e3f69aa5" containerID="4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85" exitCode=0
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.059563 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" event={"ID":"11bcb7f4-f89c-4a95-824a-6388e3f69aa5","Type":"ContainerDied","Data":"4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85"}
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.059619 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29" event={"ID":"11bcb7f4-f89c-4a95-824a-6388e3f69aa5","Type":"ContainerDied","Data":"7b02cdf75db6dbbf480f6257a1bec91694fae72609bd4bb5f96c049b327a076d"}
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.059699 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29"
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.090249 4909 scope.go:117] "RemoveContainer" containerID="cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1"
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.090429 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2qlc6"]
Nov 26 07:03:54 crc kubenswrapper[4909]: E1126 07:03:54.090700 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1\": container with ID starting with cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1 not found: ID does not exist" containerID="cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1"
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.090738 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1"} err="failed to get container status \"cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1\": rpc error: code = NotFound desc = could not find container \"cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1\": container with ID starting with cd3523dcb15ddecebc29d30286e4fe3840b7fcea125e3b55bc7d030df73dbcd1 not found: ID does not exist"
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.090780 4909 scope.go:117] "RemoveContainer" containerID="4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85"
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.094400 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2qlc6"]
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.103326 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29"]
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.107092 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c4h29"]
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.111398 4909 scope.go:117] "RemoveContainer" containerID="4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85"
Nov 26 07:03:54 crc kubenswrapper[4909]: E1126 07:03:54.111922 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85\": container with ID starting with 4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85 not found: ID does not exist" containerID="4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85"
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.111965 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85"} err="failed to get container status \"4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85\": rpc error: code = NotFound desc = could not find container \"4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85\": container with ID starting with 4bec1c0a4349043f208f731e67af64b42768f0007303280f1d4276a7b3ab2c85 not found: ID does not exist"
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.507784 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f21f776-e2f4-41e5-bdb9-6639817afa17" path="/var/lib/kubelet/pods/0f21f776-e2f4-41e5-bdb9-6639817afa17/volumes"
Nov 26 07:03:54 crc kubenswrapper[4909]: I1126 07:03:54.508774 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11bcb7f4-f89c-4a95-824a-6388e3f69aa5" path="/var/lib/kubelet/pods/11bcb7f4-f89c-4a95-824a-6388e3f69aa5/volumes"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.170892 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"]
Nov 26 07:03:55 crc kubenswrapper[4909]: E1126 07:03:55.171359 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11bcb7f4-f89c-4a95-824a-6388e3f69aa5" containerName="route-controller-manager"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.171401 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="11bcb7f4-f89c-4a95-824a-6388e3f69aa5" containerName="route-controller-manager"
Nov 26 07:03:55 crc kubenswrapper[4909]: E1126 07:03:55.171438 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164ecf33-50f7-404d-915e-cc17d8eb6c71" containerName="pruner"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.171455 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="164ecf33-50f7-404d-915e-cc17d8eb6c71" containerName="pruner"
Nov 26 07:03:55 crc kubenswrapper[4909]: E1126 07:03:55.171488 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3aa18b4-93d7-4cf2-91b2-26c526555a56" containerName="pruner"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.171509 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3aa18b4-93d7-4cf2-91b2-26c526555a56" containerName="pruner"
Nov 26 07:03:55 crc kubenswrapper[4909]: E1126 07:03:55.171549 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f21f776-e2f4-41e5-bdb9-6639817afa17" containerName="controller-manager"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.171566 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f21f776-e2f4-41e5-bdb9-6639817afa17" containerName="controller-manager"
Nov 26 07:03:55 crc kubenswrapper[4909]: E1126 07:03:55.171583 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9" containerName="collect-profiles"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.171660 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9" containerName="collect-profiles"
Nov 26 07:03:55 crc kubenswrapper[4909]: E1126 07:03:55.171685 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1983ba-304c-41d3-a747-88631e6e5c0f" containerName="extract-content"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.171703 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1983ba-304c-41d3-a747-88631e6e5c0f" containerName="extract-content"
Nov 26 07:03:55 crc kubenswrapper[4909]: E1126 07:03:55.171733 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1983ba-304c-41d3-a747-88631e6e5c0f" containerName="extract-utilities"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.171766 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1983ba-304c-41d3-a747-88631e6e5c0f" containerName="extract-utilities"
Nov 26 07:03:55 crc kubenswrapper[4909]: E1126 07:03:55.171794 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1983ba-304c-41d3-a747-88631e6e5c0f" containerName="registry-server"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.171811 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1983ba-304c-41d3-a747-88631e6e5c0f" containerName="registry-server"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.174881 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9" containerName="collect-profiles"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.174940 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1983ba-304c-41d3-a747-88631e6e5c0f" containerName="registry-server"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.174962 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="11bcb7f4-f89c-4a95-824a-6388e3f69aa5" containerName="route-controller-manager"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.174982 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="164ecf33-50f7-404d-915e-cc17d8eb6c71" containerName="pruner"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.174998 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f21f776-e2f4-41e5-bdb9-6639817afa17" containerName="controller-manager"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.175008 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3aa18b4-93d7-4cf2-91b2-26c526555a56" containerName="pruner"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.175668 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.191073 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.191907 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9454896b9-n45wd"]
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.193349 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.195720 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.196076 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.196342 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.196823 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.197206 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.202226 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.203407 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.203917 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.204297 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.205224 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9454896b9-n45wd"]
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.205966 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.206173 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.209181 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"]
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.252026 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.339134 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-client-ca\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.339200 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-config\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.339220 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-config\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.339235 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-client-ca\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.339252 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r649z\" (UniqueName: \"kubernetes.io/projected/46c61e04-7aae-4069-8b67-28caf0e4abc5-kube-api-access-r649z\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.339278 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6lx\" (UniqueName: \"kubernetes.io/projected/48851490-e671-4a89-a0ee-ed5a5aeb1813-kube-api-access-5n6lx\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.339351 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-proxy-ca-bundles\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.339395 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c61e04-7aae-4069-8b67-28caf0e4abc5-serving-cert\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.339429 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48851490-e671-4a89-a0ee-ed5a5aeb1813-serving-cert\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.440672 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-config\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.440771 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r649z\" (UniqueName: \"kubernetes.io/projected/46c61e04-7aae-4069-8b67-28caf0e4abc5-kube-api-access-r649z\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.440800 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-config\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.440823 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-client-ca\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.440861 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6lx\" (UniqueName: \"kubernetes.io/projected/48851490-e671-4a89-a0ee-ed5a5aeb1813-kube-api-access-5n6lx\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.440893 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-proxy-ca-bundles\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.440926 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c61e04-7aae-4069-8b67-28caf0e4abc5-serving-cert\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.440966 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48851490-e671-4a89-a0ee-ed5a5aeb1813-serving-cert\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.441022 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-client-ca\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.442508 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-client-ca\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.442541 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-config\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.442584 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-client-ca\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.443286 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-proxy-ca-bundles\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.445781 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-config\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.447745 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48851490-e671-4a89-a0ee-ed5a5aeb1813-serving-cert\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.448281 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c61e04-7aae-4069-8b67-28caf0e4abc5-serving-cert\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.457224 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6lx\" (UniqueName: \"kubernetes.io/projected/48851490-e671-4a89-a0ee-ed5a5aeb1813-kube-api-access-5n6lx\") pod \"route-controller-manager-5948cb894c-t8hk2\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.465400 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r649z\" (UniqueName: \"kubernetes.io/projected/46c61e04-7aae-4069-8b67-28caf0e4abc5-kube-api-access-r649z\") pod \"controller-manager-9454896b9-n45wd\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.520128 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.530785 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.748259 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"]
Nov 26 07:03:55 crc kubenswrapper[4909]: I1126 07:03:55.779646 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9454896b9-n45wd"]
Nov 26 07:03:55 crc kubenswrapper[4909]: W1126 07:03:55.789170 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46c61e04_7aae_4069_8b67_28caf0e4abc5.slice/crio-7688e94a5d8faa626386e1afea25d3492ea2091819a0e89d3597dd19ae73c8d8 WatchSource:0}: Error finding container 7688e94a5d8faa626386e1afea25d3492ea2091819a0e89d3597dd19ae73c8d8: Status 404 returned error can't find the container with id 7688e94a5d8faa626386e1afea25d3492ea2091819a0e89d3597dd19ae73c8d8
Nov 26 07:03:56 crc kubenswrapper[4909]: I1126 07:03:56.074061 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2" event={"ID":"48851490-e671-4a89-a0ee-ed5a5aeb1813","Type":"ContainerStarted","Data":"fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225"}
Nov 26 07:03:56 crc kubenswrapper[4909]: I1126 07:03:56.074104 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2" event={"ID":"48851490-e671-4a89-a0ee-ed5a5aeb1813","Type":"ContainerStarted","Data":"e2a6e12b7168c9945cb211525e5880c8e57cec3d8796454986977837a17175ad"}
Nov 26 07:03:56 crc kubenswrapper[4909]: I1126 07:03:56.074428 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:56 crc kubenswrapper[4909]: I1126 07:03:56.076399 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9454896b9-n45wd" event={"ID":"46c61e04-7aae-4069-8b67-28caf0e4abc5","Type":"ContainerStarted","Data":"c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2"}
Nov 26 07:03:56 crc kubenswrapper[4909]: I1126 07:03:56.076452 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9454896b9-n45wd" event={"ID":"46c61e04-7aae-4069-8b67-28caf0e4abc5","Type":"ContainerStarted","Data":"7688e94a5d8faa626386e1afea25d3492ea2091819a0e89d3597dd19ae73c8d8"}
Nov 26 07:03:56 crc kubenswrapper[4909]: I1126 07:03:56.076631 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:56 crc kubenswrapper[4909]: I1126 07:03:56.081643 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9454896b9-n45wd"
Nov 26 07:03:56 crc kubenswrapper[4909]: I1126 07:03:56.125982 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2" podStartSLOduration=3.125968997 podStartE2EDuration="3.125968997s" podCreationTimestamp="2025-11-26 07:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:03:56.122028462 +0000 UTC m=+208.268239638" watchObservedRunningTime="2025-11-26 07:03:56.125968997 +0000 UTC m=+208.272180163"
Nov 26 07:03:56 crc kubenswrapper[4909]: I1126 07:03:56.359366 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"
Nov 26 07:03:56 crc kubenswrapper[4909]: I1126 07:03:56.376931 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9454896b9-n45wd" podStartSLOduration=3.3768841529999998 podStartE2EDuration="3.376884153s" podCreationTimestamp="2025-11-26 07:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:03:56.146853441 +0000 UTC m=+208.293064617" watchObservedRunningTime="2025-11-26 07:03:56.376884153 +0000 UTC m=+208.523095319"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.088460 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kslpd"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.088978 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kslpd"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.135273 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kslpd"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.276053 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4bl49"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.276170 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4bl49"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.317329 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4bl49"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.684393 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.684476 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.738080 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.773797 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.774209 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:03:57 crc kubenswrapper[4909]: I1126 07:03:57.815981 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:03:58 crc kubenswrapper[4909]: I1126 07:03:58.123377 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:03:58 crc kubenswrapper[4909]: I1126 07:03:58.130346 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4bl49"
Nov 26 07:03:58 crc kubenswrapper[4909]: I1126 07:03:58.134856 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kslpd"
Nov 26 07:03:58 crc kubenswrapper[4909]: I1126 07:03:58.138990 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f587c"
Nov 26 07:03:59 crc kubenswrapper[4909]: I1126 07:03:59.053460 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqz4f"]
Nov 26 07:03:59 crc kubenswrapper[4909]: I1126 07:03:59.303471 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:03:59 crc kubenswrapper[4909]: I1126 07:03:59.303623 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:03:59 crc kubenswrapper[4909]: I1126 07:03:59.377443 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:03:59 crc kubenswrapper[4909]: I1126 07:03:59.647755 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f587c"]
Nov 26 07:03:59 crc kubenswrapper[4909]: I1126 07:03:59.731478 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-db4mw"
Nov 26 07:03:59 crc kubenswrapper[4909]: I1126 07:03:59.731546 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-db4mw"
Nov 26 07:03:59 crc kubenswrapper[4909]: I1126 07:03:59.776309 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-db4mw"
Nov 26 07:04:00 crc kubenswrapper[4909]: I1126 07:04:00.097980 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wqz4f" podUID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" containerName="registry-server" containerID="cri-o://227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf" gracePeriod=2
Nov 26 07:04:00 crc kubenswrapper[4909]: I1126 07:04:00.143712 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-db4mw"
Nov 26 07:04:00 crc kubenswrapper[4909]: I1126 07:04:00.145055 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k62sn"
Nov 26 07:04:00 crc kubenswrapper[4909]: I1126 07:04:00.550062 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqz4f"
Nov 26 07:04:00 crc kubenswrapper[4909]: I1126 07:04:00.715574 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q2dj\" (UniqueName: \"kubernetes.io/projected/07aa73fa-a53a-4031-89dd-81c5db3e01ea-kube-api-access-6q2dj\") pod \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") "
Nov 26 07:04:00 crc kubenswrapper[4909]: I1126 07:04:00.715652 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-catalog-content\") pod \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") "
Nov 26 07:04:00 crc kubenswrapper[4909]: I1126 07:04:00.715751 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-utilities\") pod \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\" (UID: \"07aa73fa-a53a-4031-89dd-81c5db3e01ea\") "
Nov 26 07:04:00 crc kubenswrapper[4909]: I1126 07:04:00.716822 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-utilities" (OuterVolumeSpecName: "utilities") pod "07aa73fa-a53a-4031-89dd-81c5db3e01ea" (UID: "07aa73fa-a53a-4031-89dd-81c5db3e01ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:04:00 crc kubenswrapper[4909]: I1126 07:04:00.720842 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07aa73fa-a53a-4031-89dd-81c5db3e01ea-kube-api-access-6q2dj" (OuterVolumeSpecName: "kube-api-access-6q2dj") pod "07aa73fa-a53a-4031-89dd-81c5db3e01ea" (UID: "07aa73fa-a53a-4031-89dd-81c5db3e01ea"). InnerVolumeSpecName "kube-api-access-6q2dj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:04:00 crc kubenswrapper[4909]: I1126 07:04:00.817684 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q2dj\" (UniqueName: \"kubernetes.io/projected/07aa73fa-a53a-4031-89dd-81c5db3e01ea-kube-api-access-6q2dj\") on node \"crc\" DevicePath \"\""
Nov 26 07:04:00 crc kubenswrapper[4909]: I1126 07:04:00.817723 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.103372 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqz4f" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.103386 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4f" event={"ID":"07aa73fa-a53a-4031-89dd-81c5db3e01ea","Type":"ContainerDied","Data":"227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf"} Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.103424 4909 scope.go:117] "RemoveContainer" containerID="227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.103261 4909 generic.go:334] "Generic (PLEG): container finished" podID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" containerID="227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf" exitCode=0 Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.103734 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4f" event={"ID":"07aa73fa-a53a-4031-89dd-81c5db3e01ea","Type":"ContainerDied","Data":"b5f4821ff94ddf4915ce2d1ff027939dc255fa846c8db2900e63423dd5629983"} Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.104021 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f587c" podUID="f60330e9-79bc-4851-9235-f8c4ff95ee96" containerName="registry-server" containerID="cri-o://16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a" gracePeriod=2 Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.127510 4909 scope.go:117] "RemoveContainer" containerID="3d30635138f666bc5c09a3c7085bf488645908a41f419a39a1d7cad937973f38" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.155373 4909 scope.go:117] "RemoveContainer" containerID="0b8c4fccba0784cbb4ef6369a2c12e29dec21159b31d20d5f66f8be12ea40006" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.170823 4909 scope.go:117] "RemoveContainer" containerID="227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf" Nov 26 07:04:01 crc kubenswrapper[4909]: E1126 07:04:01.171229 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf\": container with ID starting with 227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf not found: ID does not exist" containerID="227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.171261 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf"} err="failed to get container status \"227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf\": rpc error: code = NotFound desc = could not find container \"227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf\": container with ID starting with 227ce5520e5ee216e03e609baf3b5e4598066bf60b38ca5773838ab70db6c1bf not found: ID does not exist" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.171282 4909 scope.go:117] "RemoveContainer" containerID="3d30635138f666bc5c09a3c7085bf488645908a41f419a39a1d7cad937973f38" Nov 26 07:04:01 crc kubenswrapper[4909]: E1126 07:04:01.171532 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3d30635138f666bc5c09a3c7085bf488645908a41f419a39a1d7cad937973f38\": container with ID starting with 3d30635138f666bc5c09a3c7085bf488645908a41f419a39a1d7cad937973f38 not found: ID does not exist" containerID="3d30635138f666bc5c09a3c7085bf488645908a41f419a39a1d7cad937973f38" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.171555 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d30635138f666bc5c09a3c7085bf488645908a41f419a39a1d7cad937973f38"} err="failed to get container status \"3d30635138f666bc5c09a3c7085bf488645908a41f419a39a1d7cad937973f38\": rpc error: code = NotFound desc = could not find container \"3d30635138f666bc5c09a3c7085bf488645908a41f419a39a1d7cad937973f38\": container with ID starting with 3d30635138f666bc5c09a3c7085bf488645908a41f419a39a1d7cad937973f38 not found: ID does not exist" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.171568 4909 scope.go:117] "RemoveContainer" containerID="0b8c4fccba0784cbb4ef6369a2c12e29dec21159b31d20d5f66f8be12ea40006" Nov 26 07:04:01 crc kubenswrapper[4909]: E1126 07:04:01.171818 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8c4fccba0784cbb4ef6369a2c12e29dec21159b31d20d5f66f8be12ea40006\": container with ID starting with 0b8c4fccba0784cbb4ef6369a2c12e29dec21159b31d20d5f66f8be12ea40006 not found: ID does not exist" containerID="0b8c4fccba0784cbb4ef6369a2c12e29dec21159b31d20d5f66f8be12ea40006" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.171838 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8c4fccba0784cbb4ef6369a2c12e29dec21159b31d20d5f66f8be12ea40006"} err="failed to get container status \"0b8c4fccba0784cbb4ef6369a2c12e29dec21159b31d20d5f66f8be12ea40006\": rpc error: code = NotFound desc = could not find container \"0b8c4fccba0784cbb4ef6369a2c12e29dec21159b31d20d5f66f8be12ea40006\": container with ID starting with 0b8c4fccba0784cbb4ef6369a2c12e29dec21159b31d20d5f66f8be12ea40006 not found: ID does not exist" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.261306 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07aa73fa-a53a-4031-89dd-81c5db3e01ea" (UID: "07aa73fa-a53a-4031-89dd-81c5db3e01ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.323676 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07aa73fa-a53a-4031-89dd-81c5db3e01ea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.432150 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqz4f"] Nov 26 07:04:01 crc kubenswrapper[4909]: I1126 07:04:01.435433 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wqz4f"] Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.048524 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-db4mw"] Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.075778 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f587c" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.111353 4909 generic.go:334] "Generic (PLEG): container finished" podID="f60330e9-79bc-4851-9235-f8c4ff95ee96" containerID="16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a" exitCode=0 Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.111444 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f587c" event={"ID":"f60330e9-79bc-4851-9235-f8c4ff95ee96","Type":"ContainerDied","Data":"16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a"} Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.111495 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f587c" event={"ID":"f60330e9-79bc-4851-9235-f8c4ff95ee96","Type":"ContainerDied","Data":"7ac2d6d4619671a02d6d5f8e4532d111b9a99f082e08b9298390b8bed758f3d7"} Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.111463 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f587c" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.111524 4909 scope.go:117] "RemoveContainer" containerID="16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.112756 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-db4mw" podUID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" containerName="registry-server" containerID="cri-o://3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c" gracePeriod=2 Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.130959 4909 scope.go:117] "RemoveContainer" containerID="a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.155829 4909 scope.go:117] "RemoveContainer" containerID="ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.221671 4909 scope.go:117] "RemoveContainer" containerID="16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a" Nov 26 07:04:02 crc kubenswrapper[4909]: E1126 07:04:02.222995 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a\": container with ID starting with 16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a not found: ID does not exist" containerID="16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.223036 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a"} err="failed to get container status \"16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a\": rpc error: code = NotFound desc = could not find container \"16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a\": container with ID starting with 16c1b154616b0b666bdb3932d38a2b7ad4c83a9f3b27eb342ace24a122be8b3a not found: ID does not exist" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.223062 4909 scope.go:117] "RemoveContainer" containerID="a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e" Nov 26 07:04:02 crc kubenswrapper[4909]: E1126 
07:04:02.223705 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e\": container with ID starting with a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e not found: ID does not exist" containerID="a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.223771 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e"} err="failed to get container status \"a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e\": rpc error: code = NotFound desc = could not find container \"a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e\": container with ID starting with a47b91a728c4685117c27e8e8cd785e456aac2b7c9b3a9a2e2be28a24dba7e0e not found: ID does not exist" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.223968 4909 scope.go:117] "RemoveContainer" containerID="ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772" Nov 26 07:04:02 crc kubenswrapper[4909]: E1126 07:04:02.224683 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772\": container with ID starting with ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772 not found: ID does not exist" containerID="ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.224749 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772"} err="failed to get container status \"ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772\": rpc error: code = NotFound desc = could not find container \"ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772\": container with ID starting with ec4ec41067596868070fcbc5980eb9169cb4d29af5843e21d24cf68344268772 not found: ID does not exist" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.236583 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-utilities\") pod \"f60330e9-79bc-4851-9235-f8c4ff95ee96\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.236720 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-catalog-content\") pod \"f60330e9-79bc-4851-9235-f8c4ff95ee96\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.236800 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsjl8\" (UniqueName: \"kubernetes.io/projected/f60330e9-79bc-4851-9235-f8c4ff95ee96-kube-api-access-dsjl8\") pod \"f60330e9-79bc-4851-9235-f8c4ff95ee96\" (UID: \"f60330e9-79bc-4851-9235-f8c4ff95ee96\") " Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.237555 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-utilities" (OuterVolumeSpecName: 
"utilities") pod "f60330e9-79bc-4851-9235-f8c4ff95ee96" (UID: "f60330e9-79bc-4851-9235-f8c4ff95ee96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.243585 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60330e9-79bc-4851-9235-f8c4ff95ee96-kube-api-access-dsjl8" (OuterVolumeSpecName: "kube-api-access-dsjl8") pod "f60330e9-79bc-4851-9235-f8c4ff95ee96" (UID: "f60330e9-79bc-4851-9235-f8c4ff95ee96"). InnerVolumeSpecName "kube-api-access-dsjl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.289483 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f60330e9-79bc-4851-9235-f8c4ff95ee96" (UID: "f60330e9-79bc-4851-9235-f8c4ff95ee96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.338374 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsjl8\" (UniqueName: \"kubernetes.io/projected/f60330e9-79bc-4851-9235-f8c4ff95ee96-kube-api-access-dsjl8\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.338406 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.338419 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60330e9-79bc-4851-9235-f8c4ff95ee96-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.444156 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f587c"] Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.449979 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f587c"] Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.510710 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" path="/var/lib/kubelet/pods/07aa73fa-a53a-4031-89dd-81c5db3e01ea/volumes" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.511299 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60330e9-79bc-4851-9235-f8c4ff95ee96" path="/var/lib/kubelet/pods/f60330e9-79bc-4851-9235-f8c4ff95ee96/volumes" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.559216 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-db4mw" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.642334 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt48r\" (UniqueName: \"kubernetes.io/projected/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-kube-api-access-xt48r\") pod \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.642437 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-utilities\") pod \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.642681 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-catalog-content\") pod \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\" (UID: \"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed\") " Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.644111 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-utilities" (OuterVolumeSpecName: "utilities") pod "7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" (UID: "7e88b90d-4bc8-40a1-94bf-ac42f7a78eed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.645039 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-kube-api-access-xt48r" (OuterVolumeSpecName: "kube-api-access-xt48r") pod "7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" (UID: "7e88b90d-4bc8-40a1-94bf-ac42f7a78eed"). InnerVolumeSpecName "kube-api-access-xt48r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.663411 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" (UID: "7e88b90d-4bc8-40a1-94bf-ac42f7a78eed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.744314 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.744361 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt48r\" (UniqueName: \"kubernetes.io/projected/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-kube-api-access-xt48r\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:02 crc kubenswrapper[4909]: I1126 07:04:02.744378 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.120880 4909 generic.go:334] "Generic (PLEG): container finished" podID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" containerID="3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c" exitCode=0 Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.120975 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4mw" event={"ID":"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed","Type":"ContainerDied","Data":"3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c"} Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.121013 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4mw" event={"ID":"7e88b90d-4bc8-40a1-94bf-ac42f7a78eed","Type":"ContainerDied","Data":"911e506fe44f0ab715b870a8e052a2c8c218f145f50535d901562517364ee3fb"} Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.121036 4909 scope.go:117] "RemoveContainer" containerID="3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c" Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.121032 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-db4mw" Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.141978 4909 scope.go:117] "RemoveContainer" containerID="dbd8962677d168de044f64cd7ea3bb689a7f8d79b107cee28e140ef49279051e" Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.151151 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-db4mw"] Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.155662 4909 scope.go:117] "RemoveContainer" containerID="2c956ed820923a14d3cdbc37b3838334c79300e4b50a328280bb4135107e05f4" Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.156345 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-db4mw"] Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.184588 4909 scope.go:117] "RemoveContainer" containerID="3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c" Nov 26 07:04:03 crc kubenswrapper[4909]: E1126 07:04:03.185055 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c\": container with ID starting with 3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c not found: ID does not exist" containerID="3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c" Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.185103 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c"} err="failed to get container status \"3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c\": rpc error: code = NotFound desc = could not find container \"3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c\": container with ID starting with 3511062df87d741d2ad7c2716afadc9086b68333ce123355eaca8064b32aae6c not found: ID does not exist" Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.185139 4909 scope.go:117] "RemoveContainer" containerID="dbd8962677d168de044f64cd7ea3bb689a7f8d79b107cee28e140ef49279051e" Nov 26 07:04:03 crc kubenswrapper[4909]: E1126 07:04:03.185498 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd8962677d168de044f64cd7ea3bb689a7f8d79b107cee28e140ef49279051e\": container with ID starting with dbd8962677d168de044f64cd7ea3bb689a7f8d79b107cee28e140ef49279051e not found: ID does not exist" containerID="dbd8962677d168de044f64cd7ea3bb689a7f8d79b107cee28e140ef49279051e" Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.185547 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd8962677d168de044f64cd7ea3bb689a7f8d79b107cee28e140ef49279051e"} err="failed to get container status \"dbd8962677d168de044f64cd7ea3bb689a7f8d79b107cee28e140ef49279051e\": rpc error: code = NotFound desc = could not find container \"dbd8962677d168de044f64cd7ea3bb689a7f8d79b107cee28e140ef49279051e\": container with ID starting with dbd8962677d168de044f64cd7ea3bb689a7f8d79b107cee28e140ef49279051e not found: ID does not exist" Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.185586 4909 scope.go:117] "RemoveContainer" containerID="2c956ed820923a14d3cdbc37b3838334c79300e4b50a328280bb4135107e05f4" Nov 26 07:04:03 crc kubenswrapper[4909]: E1126 07:04:03.185923 4909 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2c956ed820923a14d3cdbc37b3838334c79300e4b50a328280bb4135107e05f4\": container with ID starting with 2c956ed820923a14d3cdbc37b3838334c79300e4b50a328280bb4135107e05f4 not found: ID does not exist" containerID="2c956ed820923a14d3cdbc37b3838334c79300e4b50a328280bb4135107e05f4" Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.185949 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c956ed820923a14d3cdbc37b3838334c79300e4b50a328280bb4135107e05f4"} err="failed to get container status \"2c956ed820923a14d3cdbc37b3838334c79300e4b50a328280bb4135107e05f4\": rpc error: code = NotFound desc = could not find container \"2c956ed820923a14d3cdbc37b3838334c79300e4b50a328280bb4135107e05f4\": container with ID starting with 2c956ed820923a14d3cdbc37b3838334c79300e4b50a328280bb4135107e05f4 not found: ID does not exist" Nov 26 07:04:03 crc kubenswrapper[4909]: I1126 07:04:03.776964 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" podUID="36375488-d0da-488c-b0ac-1e4f63490cbd" containerName="oauth-openshift" containerID="cri-o://dd6baaa31c0b9557fb5c3890b51ddd7e5d10c7baf570e3033a67219d9ecac23b" gracePeriod=15 Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.131006 4909 generic.go:334] "Generic (PLEG): container finished" podID="36375488-d0da-488c-b0ac-1e4f63490cbd" containerID="dd6baaa31c0b9557fb5c3890b51ddd7e5d10c7baf570e3033a67219d9ecac23b" exitCode=0 Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.131187 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" event={"ID":"36375488-d0da-488c-b0ac-1e4f63490cbd","Type":"ContainerDied","Data":"dd6baaa31c0b9557fb5c3890b51ddd7e5d10c7baf570e3033a67219d9ecac23b"} Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.259570 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.367176 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-idp-0-file-data\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.367232 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-policies\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.367252 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-login\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.367282 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-serving-cert\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.367316 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-error\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.367337 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-trusted-ca-bundle\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.367357 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-router-certs\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.368079 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.368146 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.368206 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-provider-selection\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.368264 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-ocp-branding-template\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.368283 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-dir\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.368672 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-session\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.368730 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.368770 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-service-ca\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.368787 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v5j8\" (UniqueName: \"kubernetes.io/projected/36375488-d0da-488c-b0ac-1e4f63490cbd-kube-api-access-6v5j8\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.369267 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.368806 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-cliconfig\") pod \"36375488-d0da-488c-b0ac-1e4f63490cbd\" (UID: \"36375488-d0da-488c-b0ac-1e4f63490cbd\") " Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.369742 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.370021 4909 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.370048 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.370060 4909 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36375488-d0da-488c-b0ac-1e4f63490cbd-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.370069 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.370078 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.372971 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.373657 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.374026 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.380901 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36375488-d0da-488c-b0ac-1e4f63490cbd-kube-api-access-6v5j8" (OuterVolumeSpecName: "kube-api-access-6v5j8") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "kube-api-access-6v5j8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.389853 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.390066 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.390264 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.390803 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.391118 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "36375488-d0da-488c-b0ac-1e4f63490cbd" (UID: "36375488-d0da-488c-b0ac-1e4f63490cbd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.471500 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.471554 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.471568 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.471579 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.471607 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.471617 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v5j8\" (UniqueName: \"kubernetes.io/projected/36375488-d0da-488c-b0ac-1e4f63490cbd-kube-api-access-6v5j8\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.471625 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.471634 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.471644 4909 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/36375488-d0da-488c-b0ac-1e4f63490cbd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:04 crc kubenswrapper[4909]: I1126 07:04:04.509431 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" path="/var/lib/kubelet/pods/7e88b90d-4bc8-40a1-94bf-ac42f7a78eed/volumes" Nov 26 07:04:05 crc kubenswrapper[4909]: I1126 07:04:05.148019 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" event={"ID":"36375488-d0da-488c-b0ac-1e4f63490cbd","Type":"ContainerDied","Data":"896d8a42c420298a7941737d9bde1187363fa70a0787b4bc7d8393d0525b21e1"} Nov 26 07:04:05 crc kubenswrapper[4909]: I1126 07:04:05.148075 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-68dmw" Nov 26 07:04:05 crc kubenswrapper[4909]: I1126 07:04:05.148082 4909 scope.go:117] "RemoveContainer" containerID="dd6baaa31c0b9557fb5c3890b51ddd7e5d10c7baf570e3033a67219d9ecac23b" Nov 26 07:04:05 crc kubenswrapper[4909]: I1126 07:04:05.177289 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-68dmw"] Nov 26 07:04:05 crc kubenswrapper[4909]: I1126 07:04:05.181952 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-68dmw"] Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.180985 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5ff5db57ff-t69fw"] Nov 26 07:04:06 crc kubenswrapper[4909]: E1126 07:04:06.181271 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60330e9-79bc-4851-9235-f8c4ff95ee96" containerName="registry-server" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181295 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60330e9-79bc-4851-9235-f8c4ff95ee96" containerName="registry-server" Nov 26 07:04:06 crc kubenswrapper[4909]: E1126 07:04:06.181312 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" containerName="registry-server" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181324 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" containerName="registry-server" Nov 26 07:04:06 crc kubenswrapper[4909]: E1126 07:04:06.181341 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" containerName="extract-content" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181352 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" containerName="extract-content" Nov 26 07:04:06 crc kubenswrapper[4909]: E1126 07:04:06.181365 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36375488-d0da-488c-b0ac-1e4f63490cbd" containerName="oauth-openshift" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181375 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="36375488-d0da-488c-b0ac-1e4f63490cbd" containerName="oauth-openshift" Nov 26 07:04:06 crc kubenswrapper[4909]: E1126 07:04:06.181387 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60330e9-79bc-4851-9235-f8c4ff95ee96" containerName="extract-content" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181396 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60330e9-79bc-4851-9235-f8c4ff95ee96" containerName="extract-content" Nov 26 07:04:06 crc kubenswrapper[4909]: E1126 07:04:06.181415 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" containerName="extract-content" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181424 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" containerName="extract-content" Nov 26 07:04:06 crc kubenswrapper[4909]: E1126 07:04:06.181442 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" containerName="extract-utilities" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181451 4909 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" containerName="extract-utilities" Nov 26 07:04:06 crc kubenswrapper[4909]: E1126 07:04:06.181464 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" containerName="extract-utilities" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181473 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" containerName="extract-utilities" Nov 26 07:04:06 crc kubenswrapper[4909]: E1126 07:04:06.181483 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60330e9-79bc-4851-9235-f8c4ff95ee96" containerName="extract-utilities" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181494 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60330e9-79bc-4851-9235-f8c4ff95ee96" containerName="extract-utilities" Nov 26 07:04:06 crc kubenswrapper[4909]: E1126 07:04:06.181512 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" containerName="registry-server" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181522 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" containerName="registry-server" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181693 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e88b90d-4bc8-40a1-94bf-ac42f7a78eed" containerName="registry-server" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181715 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60330e9-79bc-4851-9235-f8c4ff95ee96" containerName="registry-server" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181735 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="36375488-d0da-488c-b0ac-1e4f63490cbd" containerName="oauth-openshift" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.181752 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="07aa73fa-a53a-4031-89dd-81c5db3e01ea" containerName="registry-server" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.182342 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.195605 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.202384 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5ff5db57ff-t69fw"] Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.203498 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.203616 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.203969 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.204269 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.204679 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.204855 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.205022 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.205243 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.205482 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.205720 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.205744 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.207569 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.220438 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.222717 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.296843 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfvj7\" (UniqueName: \"kubernetes.io/projected/20ee1817-4e61-4ff2-b367-58be163c78fc-kube-api-access-xfvj7\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " 
pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.296937 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-session\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.296992 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.297030 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.297204 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ee1817-4e61-4ff2-b367-58be163c78fc-audit-dir\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.297273 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.297303 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-service-ca\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.297328 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-template-login\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.297358 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.297498 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-router-certs\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.297666 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.297702 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.297743 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-audit-policies\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.297828 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-template-error\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.398997 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-session\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399085 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399128 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399204 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ee1817-4e61-4ff2-b367-58be163c78fc-audit-dir\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399237 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399273 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-service-ca\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399313 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-template-login\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399352 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399400 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-router-certs\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399475 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399518 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399559 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-audit-policies\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399636 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-template-error\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.399679 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfvj7\" (UniqueName: \"kubernetes.io/projected/20ee1817-4e61-4ff2-b367-58be163c78fc-kube-api-access-xfvj7\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.402064 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-audit-policies\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.402753 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.403235 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ee1817-4e61-4ff2-b367-58be163c78fc-audit-dir\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.403770 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.407018 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.407066 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.407771 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.407978 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-template-login\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.408051 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-service-ca\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.409288 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-router-certs\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.410239 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.412352 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-user-template-error\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.413861 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/20ee1817-4e61-4ff2-b367-58be163c78fc-v4-0-config-system-session\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.432917 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfvj7\" (UniqueName: \"kubernetes.io/projected/20ee1817-4e61-4ff2-b367-58be163c78fc-kube-api-access-xfvj7\") pod \"oauth-openshift-5ff5db57ff-t69fw\" (UID: \"20ee1817-4e61-4ff2-b367-58be163c78fc\") " pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.509767 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36375488-d0da-488c-b0ac-1e4f63490cbd" path="/var/lib/kubelet/pods/36375488-d0da-488c-b0ac-1e4f63490cbd/volumes" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.551070 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:06 crc kubenswrapper[4909]: I1126 07:04:06.989202 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5ff5db57ff-t69fw"] Nov 26 07:04:07 crc kubenswrapper[4909]: I1126 07:04:07.161527 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" event={"ID":"20ee1817-4e61-4ff2-b367-58be163c78fc","Type":"ContainerStarted","Data":"a03b8adc84e1c9031bdf03b4fa3c4342c816689be9f9dae731816f02203c9c8e"} Nov 26 07:04:07 crc kubenswrapper[4909]: I1126 07:04:07.300350 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:04:07 crc kubenswrapper[4909]: I1126 07:04:07.300408 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:04:07 crc kubenswrapper[4909]: I1126 07:04:07.300457 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:04:07 crc kubenswrapper[4909]: I1126 07:04:07.301157 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 07:04:07 crc kubenswrapper[4909]: I1126 07:04:07.301239 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb" gracePeriod=600 Nov 26 07:04:08 crc kubenswrapper[4909]: I1126 07:04:08.171049 4909 generic.go:334] "Generic (PLEG): container finished" 
podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb" exitCode=0 Nov 26 07:04:08 crc kubenswrapper[4909]: I1126 07:04:08.171161 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb"} Nov 26 07:04:08 crc kubenswrapper[4909]: I1126 07:04:08.171235 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"d6aa4bfaa92cc741c58fea8e96b8993071a005ef1d633d1dadf1211dbb440e70"} Nov 26 07:04:08 crc kubenswrapper[4909]: I1126 07:04:08.173388 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" event={"ID":"20ee1817-4e61-4ff2-b367-58be163c78fc","Type":"ContainerStarted","Data":"63c760e4ad5a7d73dfd82d5399869650e4b2b0d2c9d3788e3a86240594b23381"} Nov 26 07:04:08 crc kubenswrapper[4909]: I1126 07:04:08.173780 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:08 crc kubenswrapper[4909]: I1126 07:04:08.184697 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" Nov 26 07:04:08 crc kubenswrapper[4909]: I1126 07:04:08.218475 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5ff5db57ff-t69fw" podStartSLOduration=30.218446503 podStartE2EDuration="30.218446503s" podCreationTimestamp="2025-11-26 07:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:04:08.213268005 +0000 UTC m=+220.359479201" watchObservedRunningTime="2025-11-26 07:04:08.218446503 +0000 UTC m=+220.364657709" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.253896 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9454896b9-n45wd"] Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.254408 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9454896b9-n45wd" podUID="46c61e04-7aae-4069-8b67-28caf0e4abc5" containerName="controller-manager" containerID="cri-o://c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2" gracePeriod=30 Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.288779 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"] Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.289006 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2" podUID="48851490-e671-4a89-a0ee-ed5a5aeb1813" containerName="route-controller-manager" containerID="cri-o://fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225" gracePeriod=30 Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.736309 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.801780 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-client-ca\") pod \"48851490-e671-4a89-a0ee-ed5a5aeb1813\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.801862 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-config\") pod \"48851490-e671-4a89-a0ee-ed5a5aeb1813\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.801907 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48851490-e671-4a89-a0ee-ed5a5aeb1813-serving-cert\") pod \"48851490-e671-4a89-a0ee-ed5a5aeb1813\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.801967 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n6lx\" (UniqueName: \"kubernetes.io/projected/48851490-e671-4a89-a0ee-ed5a5aeb1813-kube-api-access-5n6lx\") pod \"48851490-e671-4a89-a0ee-ed5a5aeb1813\" (UID: \"48851490-e671-4a89-a0ee-ed5a5aeb1813\") " Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.803423 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-client-ca" (OuterVolumeSpecName: "client-ca") pod "48851490-e671-4a89-a0ee-ed5a5aeb1813" (UID: "48851490-e671-4a89-a0ee-ed5a5aeb1813"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.803509 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-config" (OuterVolumeSpecName: "config") pod "48851490-e671-4a89-a0ee-ed5a5aeb1813" (UID: "48851490-e671-4a89-a0ee-ed5a5aeb1813"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.808168 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48851490-e671-4a89-a0ee-ed5a5aeb1813-kube-api-access-5n6lx" (OuterVolumeSpecName: "kube-api-access-5n6lx") pod "48851490-e671-4a89-a0ee-ed5a5aeb1813" (UID: "48851490-e671-4a89-a0ee-ed5a5aeb1813"). InnerVolumeSpecName "kube-api-access-5n6lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.808219 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48851490-e671-4a89-a0ee-ed5a5aeb1813-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "48851490-e671-4a89-a0ee-ed5a5aeb1813" (UID: "48851490-e671-4a89-a0ee-ed5a5aeb1813"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.823801 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9454896b9-n45wd" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.903466 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-proxy-ca-bundles\") pod \"46c61e04-7aae-4069-8b67-28caf0e4abc5\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.903865 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c61e04-7aae-4069-8b67-28caf0e4abc5-serving-cert\") pod \"46c61e04-7aae-4069-8b67-28caf0e4abc5\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.904049 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-config\") pod \"46c61e04-7aae-4069-8b67-28caf0e4abc5\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.904192 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r649z\" (UniqueName: \"kubernetes.io/projected/46c61e04-7aae-4069-8b67-28caf0e4abc5-kube-api-access-r649z\") pod \"46c61e04-7aae-4069-8b67-28caf0e4abc5\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.904376 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-client-ca\") pod \"46c61e04-7aae-4069-8b67-28caf0e4abc5\" (UID: \"46c61e04-7aae-4069-8b67-28caf0e4abc5\") " Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.904771 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.904872 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48851490-e671-4a89-a0ee-ed5a5aeb1813-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.904958 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n6lx\" (UniqueName: \"kubernetes.io/projected/48851490-e671-4a89-a0ee-ed5a5aeb1813-kube-api-access-5n6lx\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.905092 4909 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48851490-e671-4a89-a0ee-ed5a5aeb1813-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.904734 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "46c61e04-7aae-4069-8b67-28caf0e4abc5" (UID: "46c61e04-7aae-4069-8b67-28caf0e4abc5"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.905029 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-config" (OuterVolumeSpecName: "config") pod "46c61e04-7aae-4069-8b67-28caf0e4abc5" (UID: "46c61e04-7aae-4069-8b67-28caf0e4abc5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.905084 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-client-ca" (OuterVolumeSpecName: "client-ca") pod "46c61e04-7aae-4069-8b67-28caf0e4abc5" (UID: "46c61e04-7aae-4069-8b67-28caf0e4abc5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.906649 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46c61e04-7aae-4069-8b67-28caf0e4abc5-kube-api-access-r649z" (OuterVolumeSpecName: "kube-api-access-r649z") pod "46c61e04-7aae-4069-8b67-28caf0e4abc5" (UID: "46c61e04-7aae-4069-8b67-28caf0e4abc5"). InnerVolumeSpecName "kube-api-access-r649z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:04:13 crc kubenswrapper[4909]: I1126 07:04:13.909338 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46c61e04-7aae-4069-8b67-28caf0e4abc5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "46c61e04-7aae-4069-8b67-28caf0e4abc5" (UID: "46c61e04-7aae-4069-8b67-28caf0e4abc5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.006799 4909 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.006843 4909 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c61e04-7aae-4069-8b67-28caf0e4abc5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.006856 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.006869 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r649z\" (UniqueName: \"kubernetes.io/projected/46c61e04-7aae-4069-8b67-28caf0e4abc5-kube-api-access-r649z\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.006883 4909 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46c61e04-7aae-4069-8b67-28caf0e4abc5-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.207396 4909 generic.go:334] "Generic (PLEG): container finished" podID="46c61e04-7aae-4069-8b67-28caf0e4abc5" containerID="c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2" exitCode=0 Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.207465 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-9454896b9-n45wd" event={"ID":"46c61e04-7aae-4069-8b67-28caf0e4abc5","Type":"ContainerDied","Data":"c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2"} Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.207542 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9454896b9-n45wd" event={"ID":"46c61e04-7aae-4069-8b67-28caf0e4abc5","Type":"ContainerDied","Data":"7688e94a5d8faa626386e1afea25d3492ea2091819a0e89d3597dd19ae73c8d8"} Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.207628 4909 scope.go:117] "RemoveContainer" containerID="c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.208506 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9454896b9-n45wd" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.209781 4909 generic.go:334] "Generic (PLEG): container finished" podID="48851490-e671-4a89-a0ee-ed5a5aeb1813" containerID="fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225" exitCode=0 Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.209817 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2" event={"ID":"48851490-e671-4a89-a0ee-ed5a5aeb1813","Type":"ContainerDied","Data":"fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225"} Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.209859 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2" event={"ID":"48851490-e671-4a89-a0ee-ed5a5aeb1813","Type":"ContainerDied","Data":"e2a6e12b7168c9945cb211525e5880c8e57cec3d8796454986977837a17175ad"} Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.209871 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.242381 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9454896b9-n45wd"] Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.243509 4909 scope.go:117] "RemoveContainer" containerID="c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2" Nov 26 07:04:14 crc kubenswrapper[4909]: E1126 07:04:14.244101 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2\": container with ID starting with c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2 not found: ID does not exist" containerID="c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.244179 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2"} err="failed to get container status \"c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2\": rpc error: code = NotFound desc = could not find container \"c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2\": container with ID starting with c74a959612aba85c391032b6c1bcccc85771dab2b7d2f5dcbcdaeb5b8cb0c6f2 not found: ID does not exist" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.244206 4909 scope.go:117] "RemoveContainer" containerID="fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.251000 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9454896b9-n45wd"] Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.261828 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"] Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.268125 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5948cb894c-t8hk2"] Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.270653 4909 scope.go:117] "RemoveContainer" containerID="fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225" Nov 26 07:04:14 crc kubenswrapper[4909]: E1126 07:04:14.271255 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225\": container with ID starting with fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225 not found: ID does not exist" containerID="fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.271314 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225"} err="failed to get container status \"fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225\": rpc error: code = NotFound desc = could not find container \"fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225\": container with ID starting with fe146bf63f2ae1422bbbd75e1ae983daaffb6f42c240826535c2f78bbb1fc225 not found: ID does not exist" Nov 26 
07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.507700 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46c61e04-7aae-4069-8b67-28caf0e4abc5" path="/var/lib/kubelet/pods/46c61e04-7aae-4069-8b67-28caf0e4abc5/volumes" Nov 26 07:04:14 crc kubenswrapper[4909]: I1126 07:04:14.508781 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48851490-e671-4a89-a0ee-ed5a5aeb1813" path="/var/lib/kubelet/pods/48851490-e671-4a89-a0ee-ed5a5aeb1813/volumes" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.183713 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp"] Nov 26 07:04:15 crc kubenswrapper[4909]: E1126 07:04:15.184266 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48851490-e671-4a89-a0ee-ed5a5aeb1813" containerName="route-controller-manager" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.184281 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="48851490-e671-4a89-a0ee-ed5a5aeb1813" containerName="route-controller-manager" Nov 26 07:04:15 crc kubenswrapper[4909]: E1126 07:04:15.184309 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c61e04-7aae-4069-8b67-28caf0e4abc5" containerName="controller-manager" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.184317 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c61e04-7aae-4069-8b67-28caf0e4abc5" containerName="controller-manager" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.184424 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="48851490-e671-4a89-a0ee-ed5a5aeb1813" containerName="route-controller-manager" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.184442 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="46c61e04-7aae-4069-8b67-28caf0e4abc5" containerName="controller-manager" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.184866 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.187539 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.187710 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.187975 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.188159 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.188792 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6"] Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.188865 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.189633 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.192354 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.192392 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.192659 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.192678 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.192743 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.192800 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.192847 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.202082 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.204299 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp"] Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.221510 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6"] Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.322643 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f64bf9b-22df-4832-9b20-e1e9a134b97a-client-ca\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.322685 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b156f60c-4fce-4ace-a093-51e84b338e8f-serving-cert\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.322702 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f64bf9b-22df-4832-9b20-e1e9a134b97a-serving-cert\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.322738 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/2f64bf9b-22df-4832-9b20-e1e9a134b97a-proxy-ca-bundles\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.322822 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b156f60c-4fce-4ace-a093-51e84b338e8f-config\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.322855 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b156f60c-4fce-4ace-a093-51e84b338e8f-client-ca\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.322915 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phqcc\" (UniqueName: \"kubernetes.io/projected/b156f60c-4fce-4ace-a093-51e84b338e8f-kube-api-access-phqcc\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.322939 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7mrx\" (UniqueName: \"kubernetes.io/projected/2f64bf9b-22df-4832-9b20-e1e9a134b97a-kube-api-access-n7mrx\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.322963 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f64bf9b-22df-4832-9b20-e1e9a134b97a-config\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.424122 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f64bf9b-22df-4832-9b20-e1e9a134b97a-client-ca\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.424178 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b156f60c-4fce-4ace-a093-51e84b338e8f-serving-cert\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.424201 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2f64bf9b-22df-4832-9b20-e1e9a134b97a-serving-cert\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.424249 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f64bf9b-22df-4832-9b20-e1e9a134b97a-proxy-ca-bundles\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.424286 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b156f60c-4fce-4ace-a093-51e84b338e8f-config\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.424310 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b156f60c-4fce-4ace-a093-51e84b338e8f-client-ca\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.424350 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phqcc\" (UniqueName: \"kubernetes.io/projected/b156f60c-4fce-4ace-a093-51e84b338e8f-kube-api-access-phqcc\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.424380 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7mrx\" (UniqueName: \"kubernetes.io/projected/2f64bf9b-22df-4832-9b20-e1e9a134b97a-kube-api-access-n7mrx\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.424408 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f64bf9b-22df-4832-9b20-e1e9a134b97a-config\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.425471 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b156f60c-4fce-4ace-a093-51e84b338e8f-client-ca\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.425807 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f64bf9b-22df-4832-9b20-e1e9a134b97a-client-ca\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: 
\"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.425961 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f64bf9b-22df-4832-9b20-e1e9a134b97a-config\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.426053 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b156f60c-4fce-4ace-a093-51e84b338e8f-config\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.426211 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f64bf9b-22df-4832-9b20-e1e9a134b97a-proxy-ca-bundles\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.430861 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b156f60c-4fce-4ace-a093-51e84b338e8f-serving-cert\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.430892 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f64bf9b-22df-4832-9b20-e1e9a134b97a-serving-cert\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.443493 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7mrx\" (UniqueName: \"kubernetes.io/projected/2f64bf9b-22df-4832-9b20-e1e9a134b97a-kube-api-access-n7mrx\") pod \"controller-manager-7fcbd86ff9-qwqg6\" (UID: \"2f64bf9b-22df-4832-9b20-e1e9a134b97a\") " pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.443500 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phqcc\" (UniqueName: \"kubernetes.io/projected/b156f60c-4fce-4ace-a093-51e84b338e8f-kube-api-access-phqcc\") pod \"route-controller-manager-6fb458b8f-8p2kp\" (UID: \"b156f60c-4fce-4ace-a093-51e84b338e8f\") " pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.501188 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.512075 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.724680 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6"] Nov 26 07:04:15 crc kubenswrapper[4909]: I1126 07:04:15.783384 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp"] Nov 26 07:04:16 crc kubenswrapper[4909]: I1126 07:04:16.225281 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" event={"ID":"2f64bf9b-22df-4832-9b20-e1e9a134b97a","Type":"ContainerStarted","Data":"34b4eddca6e60221be84fad8c7b20829ed6424d462df0fe5a2e84c73642e79b8"} Nov 26 07:04:16 crc kubenswrapper[4909]: I1126 07:04:16.225682 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" event={"ID":"2f64bf9b-22df-4832-9b20-e1e9a134b97a","Type":"ContainerStarted","Data":"473855cbda3ee0057e374ec392184d5278649ce687c5d4f6d4754aaf890650e4"} Nov 26 07:04:16 crc kubenswrapper[4909]: I1126 07:04:16.225713 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:16 crc kubenswrapper[4909]: I1126 07:04:16.228147 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" event={"ID":"b156f60c-4fce-4ace-a093-51e84b338e8f","Type":"ContainerStarted","Data":"c6094c158a7243a01226aae69e5a3c4fff3ddbff0ae32ef8f810d53a9fed48ff"} Nov 26 07:04:16 crc kubenswrapper[4909]: I1126 07:04:16.228197 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:16 crc kubenswrapper[4909]: I1126 07:04:16.228214 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" event={"ID":"b156f60c-4fce-4ace-a093-51e84b338e8f","Type":"ContainerStarted","Data":"dac531a9bf44b6e1715071d06a0fe7a6d505a4963548fc283e814d746c77442b"} Nov 26 07:04:16 crc kubenswrapper[4909]: I1126 07:04:16.234252 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" Nov 26 07:04:16 crc kubenswrapper[4909]: I1126 07:04:16.251232 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fcbd86ff9-qwqg6" podStartSLOduration=3.251208107 podStartE2EDuration="3.251208107s" podCreationTimestamp="2025-11-26 07:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:04:16.245108925 +0000 UTC m=+228.391320091" watchObservedRunningTime="2025-11-26 07:04:16.251208107 +0000 UTC m=+228.397419273" Nov 26 07:04:16 crc kubenswrapper[4909]: I1126 07:04:16.291154 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" podStartSLOduration=3.291133996 podStartE2EDuration="3.291133996s" podCreationTimestamp="2025-11-26 07:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-26 07:04:16.286991006 +0000 UTC m=+228.433202172" watchObservedRunningTime="2025-11-26 07:04:16.291133996 +0000 UTC m=+228.437345172" Nov 26 07:04:16 crc kubenswrapper[4909]: I1126 07:04:16.344144 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fb458b8f-8p2kp" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.263317 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bl49"] Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.264550 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4bl49" podUID="e602dd02-2a76-453b-932d-3f670998c035" containerName="registry-server" containerID="cri-o://a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532" gracePeriod=30 Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.276302 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kslpd"] Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.276555 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kslpd" podUID="595bc076-964b-4cf0-a307-688b3458164c" containerName="registry-server" containerID="cri-o://5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0" gracePeriod=30 Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.288936 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6sfv"] Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.289274 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" podUID="9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f" containerName="marketplace-operator" containerID="cri-o://3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497" gracePeriod=30 Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.293704 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k62sn"] Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.293931 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k62sn" podUID="aabdf0c7-5fdc-4103-beab-05890462e3e2" containerName="registry-server" containerID="cri-o://3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019" gracePeriod=30 Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.307178 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5k95k"] Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.307563 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5k95k" podUID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" containerName="registry-server" containerID="cri-o://948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c" gracePeriod=30 Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.314749 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s7vvj"] Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.315817 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.327960 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s7vvj"] Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.368699 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s7vvj\" (UID: \"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.368755 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpm8q\" (UniqueName: \"kubernetes.io/projected/59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4-kube-api-access-gpm8q\") pod \"marketplace-operator-79b997595-s7vvj\" (UID: \"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.368883 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s7vvj\" (UID: \"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.469917 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s7vvj\" (UID: \"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.469964 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpm8q\" (UniqueName: \"kubernetes.io/projected/59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4-kube-api-access-gpm8q\") pod \"marketplace-operator-79b997595-s7vvj\" (UID: \"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.469989 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s7vvj\" (UID: \"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.471547 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s7vvj\" (UID: \"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.475244 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s7vvj\" (UID: \"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.485885 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpm8q\" (UniqueName: \"kubernetes.io/projected/59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4-kube-api-access-gpm8q\") pod \"marketplace-operator-79b997595-s7vvj\" (UID: \"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.642917 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.811241 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bl49" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.877683 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e602dd02-2a76-453b-932d-3f670998c035-utilities\") pod \"e602dd02-2a76-453b-932d-3f670998c035\" (UID: \"e602dd02-2a76-453b-932d-3f670998c035\") " Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.877754 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n2l9\" (UniqueName: \"kubernetes.io/projected/e602dd02-2a76-453b-932d-3f670998c035-kube-api-access-7n2l9\") pod \"e602dd02-2a76-453b-932d-3f670998c035\" (UID: \"e602dd02-2a76-453b-932d-3f670998c035\") " Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.877797 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e602dd02-2a76-453b-932d-3f670998c035-catalog-content\") pod \"e602dd02-2a76-453b-932d-3f670998c035\" (UID: \"e602dd02-2a76-453b-932d-3f670998c035\") " Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.878880 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e602dd02-2a76-453b-932d-3f670998c035-utilities" (OuterVolumeSpecName: "utilities") pod "e602dd02-2a76-453b-932d-3f670998c035" (UID: "e602dd02-2a76-453b-932d-3f670998c035"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.888548 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e602dd02-2a76-453b-932d-3f670998c035-kube-api-access-7n2l9" (OuterVolumeSpecName: "kube-api-access-7n2l9") pod "e602dd02-2a76-453b-932d-3f670998c035" (UID: "e602dd02-2a76-453b-932d-3f670998c035"). InnerVolumeSpecName "kube-api-access-7n2l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.937513 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e602dd02-2a76-453b-932d-3f670998c035-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e602dd02-2a76-453b-932d-3f670998c035" (UID: "e602dd02-2a76-453b-932d-3f670998c035"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.979511 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e602dd02-2a76-453b-932d-3f670998c035-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.979548 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n2l9\" (UniqueName: \"kubernetes.io/projected/e602dd02-2a76-453b-932d-3f670998c035-kube-api-access-7n2l9\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:26 crc kubenswrapper[4909]: I1126 07:04:26.979562 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e602dd02-2a76-453b-932d-3f670998c035-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.039208 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5k95k" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.045207 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.068663 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k62sn" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.087453 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.180814 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqkzm\" (UniqueName: \"kubernetes.io/projected/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-kube-api-access-kqkzm\") pod \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.180905 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-utilities\") pod \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.180928 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-utilities\") pod \"aabdf0c7-5fdc-4103-beab-05890462e3e2\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.180960 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpm6r\" (UniqueName: \"kubernetes.io/projected/595bc076-964b-4cf0-a307-688b3458164c-kube-api-access-kpm6r\") pod \"595bc076-964b-4cf0-a307-688b3458164c\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.180996 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdcjf\" (UniqueName: \"kubernetes.io/projected/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-kube-api-access-tdcjf\") pod \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.181023 4909 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qlsv9\" (UniqueName: \"kubernetes.io/projected/aabdf0c7-5fdc-4103-beab-05890462e3e2-kube-api-access-qlsv9\") pod \"aabdf0c7-5fdc-4103-beab-05890462e3e2\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.181064 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-catalog-content\") pod \"595bc076-964b-4cf0-a307-688b3458164c\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.181085 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-catalog-content\") pod \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\" (UID: \"28f2292b-c3f2-4e25-ad1d-45a4b78bebff\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.181108 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-operator-metrics\") pod \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.181138 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-catalog-content\") pod \"aabdf0c7-5fdc-4103-beab-05890462e3e2\" (UID: \"aabdf0c7-5fdc-4103-beab-05890462e3e2\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.181175 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-utilities\") pod \"595bc076-964b-4cf0-a307-688b3458164c\" (UID: \"595bc076-964b-4cf0-a307-688b3458164c\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.181198 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-trusted-ca\") pod \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\" (UID: \"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f\") " Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.182010 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f" (UID: "9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.184373 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabdf0c7-5fdc-4103-beab-05890462e3e2-kube-api-access-qlsv9" (OuterVolumeSpecName: "kube-api-access-qlsv9") pod "aabdf0c7-5fdc-4103-beab-05890462e3e2" (UID: "aabdf0c7-5fdc-4103-beab-05890462e3e2"). InnerVolumeSpecName "kube-api-access-qlsv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.187740 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-utilities" (OuterVolumeSpecName: "utilities") pod "aabdf0c7-5fdc-4103-beab-05890462e3e2" (UID: "aabdf0c7-5fdc-4103-beab-05890462e3e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.187888 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-utilities" (OuterVolumeSpecName: "utilities") pod "28f2292b-c3f2-4e25-ad1d-45a4b78bebff" (UID: "28f2292b-c3f2-4e25-ad1d-45a4b78bebff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.188476 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-utilities" (OuterVolumeSpecName: "utilities") pod "595bc076-964b-4cf0-a307-688b3458164c" (UID: "595bc076-964b-4cf0-a307-688b3458164c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.188575 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-kube-api-access-kqkzm" (OuterVolumeSpecName: "kube-api-access-kqkzm") pod "9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f" (UID: "9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f"). InnerVolumeSpecName "kube-api-access-kqkzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.189771 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-kube-api-access-tdcjf" (OuterVolumeSpecName: "kube-api-access-tdcjf") pod "28f2292b-c3f2-4e25-ad1d-45a4b78bebff" (UID: "28f2292b-c3f2-4e25-ad1d-45a4b78bebff"). InnerVolumeSpecName "kube-api-access-tdcjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.190739 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595bc076-964b-4cf0-a307-688b3458164c-kube-api-access-kpm6r" (OuterVolumeSpecName: "kube-api-access-kpm6r") pod "595bc076-964b-4cf0-a307-688b3458164c" (UID: "595bc076-964b-4cf0-a307-688b3458164c"). InnerVolumeSpecName "kube-api-access-kpm6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.193919 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f" (UID: "9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.208784 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aabdf0c7-5fdc-4103-beab-05890462e3e2" (UID: "aabdf0c7-5fdc-4103-beab-05890462e3e2"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.236382 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "595bc076-964b-4cf0-a307-688b3458164c" (UID: "595bc076-964b-4cf0-a307-688b3458164c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.250419 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s7vvj"] Nov 26 07:04:27 crc kubenswrapper[4909]: W1126 07:04:27.256551 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59fc50dc_e77e_4c40_b29a_c9d8f48ac4d4.slice/crio-9320f3598061f955ceb0a5168290d86343bd89490542edccfe3d5f838bfca07c WatchSource:0}: Error finding container 9320f3598061f955ceb0a5168290d86343bd89490542edccfe3d5f838bfca07c: Status 404 returned error can't find the container with id 9320f3598061f955ceb0a5168290d86343bd89490542edccfe3d5f838bfca07c Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.283609 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.283854 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.283869 4909 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.283881 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqkzm\" (UniqueName: \"kubernetes.io/projected/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-kube-api-access-kqkzm\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.283893 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.283905 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabdf0c7-5fdc-4103-beab-05890462e3e2-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.283941 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpm6r\" (UniqueName: \"kubernetes.io/projected/595bc076-964b-4cf0-a307-688b3458164c-kube-api-access-kpm6r\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.283952 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdcjf\" (UniqueName: \"kubernetes.io/projected/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-kube-api-access-tdcjf\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.283962 4909 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-qlsv9\" (UniqueName: \"kubernetes.io/projected/aabdf0c7-5fdc-4103-beab-05890462e3e2-kube-api-access-qlsv9\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.283972 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/595bc076-964b-4cf0-a307-688b3458164c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.283982 4909 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.304376 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28f2292b-c3f2-4e25-ad1d-45a4b78bebff" (UID: "28f2292b-c3f2-4e25-ad1d-45a4b78bebff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.308682 4909 generic.go:334] "Generic (PLEG): container finished" podID="e602dd02-2a76-453b-932d-3f670998c035" containerID="a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532" exitCode=0 Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.308750 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bl49" event={"ID":"e602dd02-2a76-453b-932d-3f670998c035","Type":"ContainerDied","Data":"a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532"} Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.308934 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bl49" event={"ID":"e602dd02-2a76-453b-932d-3f670998c035","Type":"ContainerDied","Data":"51234d0274bae9d969013a9143f2137284debf49c59d37d97dc6ee291605f3e3"} Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.308782 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bl49" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.308978 4909 scope.go:117] "RemoveContainer" containerID="a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.312321 4909 generic.go:334] "Generic (PLEG): container finished" podID="595bc076-964b-4cf0-a307-688b3458164c" containerID="5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0" exitCode=0 Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.312381 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kslpd" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.312391 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kslpd" event={"ID":"595bc076-964b-4cf0-a307-688b3458164c","Type":"ContainerDied","Data":"5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0"} Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.312458 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kslpd" event={"ID":"595bc076-964b-4cf0-a307-688b3458164c","Type":"ContainerDied","Data":"78f6cdaeeffec025aec8ebe630db85fbb45006b20f9852bf4e4cbb4a42400868"} Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.315087 4909 generic.go:334] "Generic (PLEG): container finished" podID="aabdf0c7-5fdc-4103-beab-05890462e3e2" containerID="3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019" exitCode=0 Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.315168 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k62sn" event={"ID":"aabdf0c7-5fdc-4103-beab-05890462e3e2","Type":"ContainerDied","Data":"3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019"} Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.315200 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k62sn" event={"ID":"aabdf0c7-5fdc-4103-beab-05890462e3e2","Type":"ContainerDied","Data":"defda3376eef9d1868d2553030255cf28fb7d7e3b23bd502862a571eb1236f2e"} Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.315268 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k62sn" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.329939 4909 generic.go:334] "Generic (PLEG): container finished" podID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" containerID="948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c" exitCode=0 Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.330133 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5k95k" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.330532 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k95k" event={"ID":"28f2292b-c3f2-4e25-ad1d-45a4b78bebff","Type":"ContainerDied","Data":"948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c"} Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.330654 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k95k" event={"ID":"28f2292b-c3f2-4e25-ad1d-45a4b78bebff","Type":"ContainerDied","Data":"8b8b5c4545913445de6e90fea5ea948756aefb662ba044026d5b3b8c4dbe80ed"} Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.332690 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" event={"ID":"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4","Type":"ContainerStarted","Data":"9320f3598061f955ceb0a5168290d86343bd89490542edccfe3d5f838bfca07c"} Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.336315 4909 generic.go:334] "Generic (PLEG): container finished" podID="9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f" containerID="3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497" exitCode=0 Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.336361 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" event={"ID":"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f","Type":"ContainerDied","Data":"3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497"} Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.336426 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" event={"ID":"9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f","Type":"ContainerDied","Data":"b0288266e8f6783bdeadc7b7e22d0aff2ea97479a9eb618c427cb582d8ecd210"} Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.336377 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g6sfv" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.356860 4909 scope.go:117] "RemoveContainer" containerID="bb8e8cd9f530dcba762c814f51f834363194893a28d1e84f5f02840dc0d5e30f" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.361983 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k62sn"] Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.366693 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k62sn"] Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.370859 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kslpd"] Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.378402 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kslpd"] Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.385003 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28f2292b-c3f2-4e25-ad1d-45a4b78bebff-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.393500 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5k95k"] Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.399691 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5k95k"] Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.404227 4909 scope.go:117] "RemoveContainer" containerID="34c29a40af5e15d21b00e736720a8586370c5a2edc416776b35d47d19d740861" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.405077 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6sfv"] Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.407436 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6sfv"] Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.418868 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bl49"] Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.422304 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4bl49"] Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.454704 4909 scope.go:117] "RemoveContainer" containerID="a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.455182 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532\": container with ID starting with a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532 not found: ID does not exist" containerID="a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.455237 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532"} err="failed to get container status \"a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532\": rpc error: code = NotFound desc = could not find container 
\"a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532\": container with ID starting with a99d3262edfc2d48cfb45010abe2ac1fc5232defe90b14e4ef0ce258538be532 not found: ID does not exist" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.455266 4909 scope.go:117] "RemoveContainer" containerID="bb8e8cd9f530dcba762c814f51f834363194893a28d1e84f5f02840dc0d5e30f" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.455531 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb8e8cd9f530dcba762c814f51f834363194893a28d1e84f5f02840dc0d5e30f\": container with ID starting with bb8e8cd9f530dcba762c814f51f834363194893a28d1e84f5f02840dc0d5e30f not found: ID does not exist" containerID="bb8e8cd9f530dcba762c814f51f834363194893a28d1e84f5f02840dc0d5e30f" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.455585 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8e8cd9f530dcba762c814f51f834363194893a28d1e84f5f02840dc0d5e30f"} err="failed to get container status \"bb8e8cd9f530dcba762c814f51f834363194893a28d1e84f5f02840dc0d5e30f\": rpc error: code = NotFound desc = could not find container \"bb8e8cd9f530dcba762c814f51f834363194893a28d1e84f5f02840dc0d5e30f\": container with ID starting with bb8e8cd9f530dcba762c814f51f834363194893a28d1e84f5f02840dc0d5e30f not found: ID does not exist" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.455690 4909 scope.go:117] "RemoveContainer" containerID="34c29a40af5e15d21b00e736720a8586370c5a2edc416776b35d47d19d740861" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.455920 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c29a40af5e15d21b00e736720a8586370c5a2edc416776b35d47d19d740861\": container with ID starting with 34c29a40af5e15d21b00e736720a8586370c5a2edc416776b35d47d19d740861 not found: ID does not exist" containerID="34c29a40af5e15d21b00e736720a8586370c5a2edc416776b35d47d19d740861" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.455948 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c29a40af5e15d21b00e736720a8586370c5a2edc416776b35d47d19d740861"} err="failed to get container status \"34c29a40af5e15d21b00e736720a8586370c5a2edc416776b35d47d19d740861\": rpc error: code = NotFound desc = could not find container \"34c29a40af5e15d21b00e736720a8586370c5a2edc416776b35d47d19d740861\": container with ID starting with 34c29a40af5e15d21b00e736720a8586370c5a2edc416776b35d47d19d740861 not found: ID does not exist" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.455963 4909 scope.go:117] "RemoveContainer" containerID="5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.472688 4909 scope.go:117] "RemoveContainer" containerID="f38a660f4fe2e5d67b31ca4713098d3ca2827fad311f0a331f24aa3092234cff" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.487807 4909 scope.go:117] "RemoveContainer" containerID="06ecff6b545e23bf76814c7c4dc3648ff2275ce47a12b9687a9f72cca4ca70d7" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.505190 4909 scope.go:117] "RemoveContainer" containerID="5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.505671 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0\": container with ID starting with 5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0 not found: ID does not exist" containerID="5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.505722 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0"} err="failed to get container status \"5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0\": rpc error: code = NotFound desc = could not find container \"5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0\": container with ID starting with 5b109f157a40ad52d89b17ede5100d001acd6c66e5bff37ab392a4b0c7d0aed0 not found: ID does not exist" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.505754 4909 scope.go:117] "RemoveContainer" containerID="f38a660f4fe2e5d67b31ca4713098d3ca2827fad311f0a331f24aa3092234cff" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.506299 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38a660f4fe2e5d67b31ca4713098d3ca2827fad311f0a331f24aa3092234cff\": container with ID starting with f38a660f4fe2e5d67b31ca4713098d3ca2827fad311f0a331f24aa3092234cff not found: ID does not exist" containerID="f38a660f4fe2e5d67b31ca4713098d3ca2827fad311f0a331f24aa3092234cff" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.506343 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38a660f4fe2e5d67b31ca4713098d3ca2827fad311f0a331f24aa3092234cff"} err="failed to get container status \"f38a660f4fe2e5d67b31ca4713098d3ca2827fad311f0a331f24aa3092234cff\": rpc error: code = NotFound desc = could not find container \"f38a660f4fe2e5d67b31ca4713098d3ca2827fad311f0a331f24aa3092234cff\": container with ID starting with f38a660f4fe2e5d67b31ca4713098d3ca2827fad311f0a331f24aa3092234cff not found: ID does not exist" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.506371 4909 scope.go:117] "RemoveContainer" containerID="06ecff6b545e23bf76814c7c4dc3648ff2275ce47a12b9687a9f72cca4ca70d7" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.506802 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06ecff6b545e23bf76814c7c4dc3648ff2275ce47a12b9687a9f72cca4ca70d7\": container with ID starting with 06ecff6b545e23bf76814c7c4dc3648ff2275ce47a12b9687a9f72cca4ca70d7 not found: ID does not exist" containerID="06ecff6b545e23bf76814c7c4dc3648ff2275ce47a12b9687a9f72cca4ca70d7" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.506843 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ecff6b545e23bf76814c7c4dc3648ff2275ce47a12b9687a9f72cca4ca70d7"} err="failed to get container status \"06ecff6b545e23bf76814c7c4dc3648ff2275ce47a12b9687a9f72cca4ca70d7\": rpc error: code = NotFound desc = could not find container \"06ecff6b545e23bf76814c7c4dc3648ff2275ce47a12b9687a9f72cca4ca70d7\": container with ID starting with 06ecff6b545e23bf76814c7c4dc3648ff2275ce47a12b9687a9f72cca4ca70d7 not found: ID does not exist" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.506872 4909 scope.go:117] "RemoveContainer" containerID="3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019" Nov 26 
07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.521474 4909 scope.go:117] "RemoveContainer" containerID="707b7a8a4045728367da6bd3cb5b94752e201f7861290e805190b25a4eca0d3f" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.535846 4909 scope.go:117] "RemoveContainer" containerID="53182805f0893ed9391924d2bac41b79c986caabcf36e169ff5ff8d6c892790c" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.551652 4909 scope.go:117] "RemoveContainer" containerID="3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.552112 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019\": container with ID starting with 3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019 not found: ID does not exist" containerID="3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.552163 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019"} err="failed to get container status \"3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019\": rpc error: code = NotFound desc = could not find container \"3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019\": container with ID starting with 3e33b5581a886f4788615f09cc20227f5e9591facc80c158e80f468845d14019 not found: ID does not exist" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.552191 4909 scope.go:117] "RemoveContainer" containerID="707b7a8a4045728367da6bd3cb5b94752e201f7861290e805190b25a4eca0d3f" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.552611 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"707b7a8a4045728367da6bd3cb5b94752e201f7861290e805190b25a4eca0d3f\": container with ID starting with 707b7a8a4045728367da6bd3cb5b94752e201f7861290e805190b25a4eca0d3f not found: ID does not exist" containerID="707b7a8a4045728367da6bd3cb5b94752e201f7861290e805190b25a4eca0d3f" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.552644 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"707b7a8a4045728367da6bd3cb5b94752e201f7861290e805190b25a4eca0d3f"} err="failed to get container status \"707b7a8a4045728367da6bd3cb5b94752e201f7861290e805190b25a4eca0d3f\": rpc error: code = NotFound desc = could not find container \"707b7a8a4045728367da6bd3cb5b94752e201f7861290e805190b25a4eca0d3f\": container with ID starting with 707b7a8a4045728367da6bd3cb5b94752e201f7861290e805190b25a4eca0d3f not found: ID does not exist" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.552668 4909 scope.go:117] "RemoveContainer" containerID="53182805f0893ed9391924d2bac41b79c986caabcf36e169ff5ff8d6c892790c" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.553002 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53182805f0893ed9391924d2bac41b79c986caabcf36e169ff5ff8d6c892790c\": container with ID starting with 53182805f0893ed9391924d2bac41b79c986caabcf36e169ff5ff8d6c892790c not found: ID does not exist" containerID="53182805f0893ed9391924d2bac41b79c986caabcf36e169ff5ff8d6c892790c" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.553038 4909 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53182805f0893ed9391924d2bac41b79c986caabcf36e169ff5ff8d6c892790c"} err="failed to get container status \"53182805f0893ed9391924d2bac41b79c986caabcf36e169ff5ff8d6c892790c\": rpc error: code = NotFound desc = could not find container \"53182805f0893ed9391924d2bac41b79c986caabcf36e169ff5ff8d6c892790c\": container with ID starting with 53182805f0893ed9391924d2bac41b79c986caabcf36e169ff5ff8d6c892790c not found: ID does not exist" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.553055 4909 scope.go:117] "RemoveContainer" containerID="948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.566187 4909 scope.go:117] "RemoveContainer" containerID="cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.579943 4909 scope.go:117] "RemoveContainer" containerID="b01522f11998987a60f5077a4f9e2e97d86652de2ae4d509df380713cfd26b08" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.593681 4909 scope.go:117] "RemoveContainer" containerID="948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.594191 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c\": container with ID starting with 948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c not found: ID does not exist" containerID="948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.594234 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c"} err="failed to get container status \"948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c\": rpc error: code = NotFound desc = could not find container \"948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c\": container with ID starting with 948575b66cbc920c44653619a2ec4e529bf8826eb04070f9fc2a1e29b8eeda5c not found: ID does not exist" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.594260 4909 scope.go:117] "RemoveContainer" containerID="cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.594685 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b\": container with ID starting with cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b not found: ID does not exist" containerID="cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.594711 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b"} err="failed to get container status \"cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b\": rpc error: code = NotFound desc = could not find container \"cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b\": container with ID starting with cf050da0e4ec107834fa421dbc87edb4af711c6fd553dd5f709521529294577b not found: ID does not exist" Nov 26 07:04:27 
crc kubenswrapper[4909]: I1126 07:04:27.594730 4909 scope.go:117] "RemoveContainer" containerID="b01522f11998987a60f5077a4f9e2e97d86652de2ae4d509df380713cfd26b08" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.595061 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b01522f11998987a60f5077a4f9e2e97d86652de2ae4d509df380713cfd26b08\": container with ID starting with b01522f11998987a60f5077a4f9e2e97d86652de2ae4d509df380713cfd26b08 not found: ID does not exist" containerID="b01522f11998987a60f5077a4f9e2e97d86652de2ae4d509df380713cfd26b08" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.595082 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b01522f11998987a60f5077a4f9e2e97d86652de2ae4d509df380713cfd26b08"} err="failed to get container status \"b01522f11998987a60f5077a4f9e2e97d86652de2ae4d509df380713cfd26b08\": rpc error: code = NotFound desc = could not find container \"b01522f11998987a60f5077a4f9e2e97d86652de2ae4d509df380713cfd26b08\": container with ID starting with b01522f11998987a60f5077a4f9e2e97d86652de2ae4d509df380713cfd26b08 not found: ID does not exist" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.595094 4909 scope.go:117] "RemoveContainer" containerID="3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.606822 4909 scope.go:117] "RemoveContainer" containerID="3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497" Nov 26 07:04:27 crc kubenswrapper[4909]: E1126 07:04:27.607299 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497\": container with ID starting with 3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497 not found: ID does not exist" containerID="3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497" Nov 26 07:04:27 crc kubenswrapper[4909]: I1126 07:04:27.607351 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497"} err="failed to get container status \"3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497\": rpc error: code = NotFound desc = could not find container \"3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497\": container with ID starting with 3f06d33ea92e6c03a90f417a646b69b9192f929e334612feb63afb59e0a54497 not found: ID does not exist" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.348945 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" event={"ID":"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4","Type":"ContainerStarted","Data":"536faeed70c9a05a03076564921bf46c5aa9037f3fb42ec5c946ca42f55e2412"} Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.350029 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.355405 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.372585 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" podStartSLOduration=2.37256808 podStartE2EDuration="2.37256808s" podCreationTimestamp="2025-11-26 07:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:04:28.370853804 +0000 UTC m=+240.517064970" watchObservedRunningTime="2025-11-26 07:04:28.37256808 +0000 UTC m=+240.518779246" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.484405 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dnb2v"] Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.484706 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595bc076-964b-4cf0-a307-688b3458164c" containerName="extract-utilities" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.484732 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="595bc076-964b-4cf0-a307-688b3458164c" containerName="extract-utilities" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.484749 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" containerName="extract-content" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.484760 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" containerName="extract-content" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.484778 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e602dd02-2a76-453b-932d-3f670998c035" containerName="extract-content" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.484791 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e602dd02-2a76-453b-932d-3f670998c035" containerName="extract-content" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.484802 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabdf0c7-5fdc-4103-beab-05890462e3e2" containerName="extract-utilities" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.484814 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabdf0c7-5fdc-4103-beab-05890462e3e2" containerName="extract-utilities" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.484828 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595bc076-964b-4cf0-a307-688b3458164c" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.484840 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="595bc076-964b-4cf0-a307-688b3458164c" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.484855 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e602dd02-2a76-453b-932d-3f670998c035" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.484866 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e602dd02-2a76-453b-932d-3f670998c035" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.484882 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f" containerName="marketplace-operator" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.484893 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f" containerName="marketplace-operator" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.484909 4909 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.484920 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.484936 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e602dd02-2a76-453b-932d-3f670998c035" containerName="extract-utilities" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.484946 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e602dd02-2a76-453b-932d-3f670998c035" containerName="extract-utilities" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.484963 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabdf0c7-5fdc-4103-beab-05890462e3e2" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.484973 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabdf0c7-5fdc-4103-beab-05890462e3e2" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.484990 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" containerName="extract-utilities" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.485003 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" containerName="extract-utilities" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.485020 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabdf0c7-5fdc-4103-beab-05890462e3e2" containerName="extract-content" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.485030 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabdf0c7-5fdc-4103-beab-05890462e3e2" containerName="extract-content" Nov 26 07:04:28 crc kubenswrapper[4909]: E1126 07:04:28.485048 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595bc076-964b-4cf0-a307-688b3458164c" containerName="extract-content" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.485058 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="595bc076-964b-4cf0-a307-688b3458164c" containerName="extract-content" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.485263 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e602dd02-2a76-453b-932d-3f670998c035" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.485284 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="595bc076-964b-4cf0-a307-688b3458164c" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.485306 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.485319 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f" containerName="marketplace-operator" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.485335 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabdf0c7-5fdc-4103-beab-05890462e3e2" containerName="registry-server" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.486533 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.492056 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.495430 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnb2v"] Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.508873 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f2292b-c3f2-4e25-ad1d-45a4b78bebff" path="/var/lib/kubelet/pods/28f2292b-c3f2-4e25-ad1d-45a4b78bebff/volumes" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.509623 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595bc076-964b-4cf0-a307-688b3458164c" path="/var/lib/kubelet/pods/595bc076-964b-4cf0-a307-688b3458164c/volumes" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.510271 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f" path="/var/lib/kubelet/pods/9d313d7a-dcd3-46fa-b7e7-d5a20dd0161f/volumes" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.511349 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabdf0c7-5fdc-4103-beab-05890462e3e2" path="/var/lib/kubelet/pods/aabdf0c7-5fdc-4103-beab-05890462e3e2/volumes" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.512059 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e602dd02-2a76-453b-932d-3f670998c035" path="/var/lib/kubelet/pods/e602dd02-2a76-453b-932d-3f670998c035/volumes" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.599126 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662bf7ae-d0e1-462d-9e20-b74af9087f01-catalog-content\") pod \"redhat-marketplace-dnb2v\" (UID: \"662bf7ae-d0e1-462d-9e20-b74af9087f01\") " pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.599206 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662bf7ae-d0e1-462d-9e20-b74af9087f01-utilities\") pod \"redhat-marketplace-dnb2v\" (UID: \"662bf7ae-d0e1-462d-9e20-b74af9087f01\") " pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.599338 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5whg4\" (UniqueName: \"kubernetes.io/projected/662bf7ae-d0e1-462d-9e20-b74af9087f01-kube-api-access-5whg4\") pod \"redhat-marketplace-dnb2v\" (UID: \"662bf7ae-d0e1-462d-9e20-b74af9087f01\") " pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.681321 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lfn6t"] Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.683883 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.689506 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.697763 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfn6t"] Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.700344 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662bf7ae-d0e1-462d-9e20-b74af9087f01-utilities\") pod \"redhat-marketplace-dnb2v\" (UID: \"662bf7ae-d0e1-462d-9e20-b74af9087f01\") " pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.700393 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5whg4\" (UniqueName: \"kubernetes.io/projected/662bf7ae-d0e1-462d-9e20-b74af9087f01-kube-api-access-5whg4\") pod \"redhat-marketplace-dnb2v\" (UID: \"662bf7ae-d0e1-462d-9e20-b74af9087f01\") " pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.700451 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662bf7ae-d0e1-462d-9e20-b74af9087f01-catalog-content\") pod \"redhat-marketplace-dnb2v\" (UID: \"662bf7ae-d0e1-462d-9e20-b74af9087f01\") " pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.700847 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662bf7ae-d0e1-462d-9e20-b74af9087f01-catalog-content\") pod \"redhat-marketplace-dnb2v\" (UID: \"662bf7ae-d0e1-462d-9e20-b74af9087f01\") " pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.700854 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662bf7ae-d0e1-462d-9e20-b74af9087f01-utilities\") pod \"redhat-marketplace-dnb2v\" (UID: \"662bf7ae-d0e1-462d-9e20-b74af9087f01\") " pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.720566 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5whg4\" (UniqueName: \"kubernetes.io/projected/662bf7ae-d0e1-462d-9e20-b74af9087f01-kube-api-access-5whg4\") pod \"redhat-marketplace-dnb2v\" (UID: \"662bf7ae-d0e1-462d-9e20-b74af9087f01\") " pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.801293 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1a9073-ad63-442c-b428-49b47ab69a83-utilities\") pod \"certified-operators-lfn6t\" (UID: \"9d1a9073-ad63-442c-b428-49b47ab69a83\") " pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.801342 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1a9073-ad63-442c-b428-49b47ab69a83-catalog-content\") pod \"certified-operators-lfn6t\" (UID: \"9d1a9073-ad63-442c-b428-49b47ab69a83\") 
" pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.801371 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfq4\" (UniqueName: \"kubernetes.io/projected/9d1a9073-ad63-442c-b428-49b47ab69a83-kube-api-access-kzfq4\") pod \"certified-operators-lfn6t\" (UID: \"9d1a9073-ad63-442c-b428-49b47ab69a83\") " pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.813120 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.902468 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfq4\" (UniqueName: \"kubernetes.io/projected/9d1a9073-ad63-442c-b428-49b47ab69a83-kube-api-access-kzfq4\") pod \"certified-operators-lfn6t\" (UID: \"9d1a9073-ad63-442c-b428-49b47ab69a83\") " pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.902801 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1a9073-ad63-442c-b428-49b47ab69a83-utilities\") pod \"certified-operators-lfn6t\" (UID: \"9d1a9073-ad63-442c-b428-49b47ab69a83\") " pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.902822 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1a9073-ad63-442c-b428-49b47ab69a83-catalog-content\") pod \"certified-operators-lfn6t\" (UID: \"9d1a9073-ad63-442c-b428-49b47ab69a83\") " pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.903213 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1a9073-ad63-442c-b428-49b47ab69a83-catalog-content\") pod \"certified-operators-lfn6t\" (UID: \"9d1a9073-ad63-442c-b428-49b47ab69a83\") " pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.903248 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1a9073-ad63-442c-b428-49b47ab69a83-utilities\") pod \"certified-operators-lfn6t\" (UID: \"9d1a9073-ad63-442c-b428-49b47ab69a83\") " pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:28 crc kubenswrapper[4909]: I1126 07:04:28.933751 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfq4\" (UniqueName: \"kubernetes.io/projected/9d1a9073-ad63-442c-b428-49b47ab69a83-kube-api-access-kzfq4\") pod \"certified-operators-lfn6t\" (UID: \"9d1a9073-ad63-442c-b428-49b47ab69a83\") " pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:29 crc kubenswrapper[4909]: I1126 07:04:29.003170 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:29 crc kubenswrapper[4909]: I1126 07:04:29.204815 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnb2v"] Nov 26 07:04:29 crc kubenswrapper[4909]: W1126 07:04:29.207870 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod662bf7ae_d0e1_462d_9e20_b74af9087f01.slice/crio-4686f5dfdc2ba63eea3b4cc81b34226b0e9eeaae635cd694c8894159d41d5cc9 WatchSource:0}: Error finding container 4686f5dfdc2ba63eea3b4cc81b34226b0e9eeaae635cd694c8894159d41d5cc9: Status 404 returned error can't find the container with id 4686f5dfdc2ba63eea3b4cc81b34226b0e9eeaae635cd694c8894159d41d5cc9 Nov 26 07:04:29 crc kubenswrapper[4909]: I1126 07:04:29.360853 4909 generic.go:334] "Generic (PLEG): container finished" podID="662bf7ae-d0e1-462d-9e20-b74af9087f01" containerID="453a7ec717281b687f66b150930cbbb6ddd20f2cdc409e63221898db6fd8f58a" exitCode=0 Nov 26 07:04:29 crc kubenswrapper[4909]: I1126 07:04:29.361047 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnb2v" event={"ID":"662bf7ae-d0e1-462d-9e20-b74af9087f01","Type":"ContainerDied","Data":"453a7ec717281b687f66b150930cbbb6ddd20f2cdc409e63221898db6fd8f58a"} Nov 26 07:04:29 crc kubenswrapper[4909]: I1126 07:04:29.361101 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnb2v" event={"ID":"662bf7ae-d0e1-462d-9e20-b74af9087f01","Type":"ContainerStarted","Data":"4686f5dfdc2ba63eea3b4cc81b34226b0e9eeaae635cd694c8894159d41d5cc9"} Nov 26 07:04:29 crc kubenswrapper[4909]: I1126 07:04:29.404831 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfn6t"] Nov 26 07:04:30 crc kubenswrapper[4909]: I1126 07:04:30.370571 4909 generic.go:334] "Generic (PLEG): container finished" podID="9d1a9073-ad63-442c-b428-49b47ab69a83" containerID="7fbfa78cfb0f999810f8990670312f74ff2b8e21e706159f3d543dbba8ef7e82" exitCode=0 Nov 26 07:04:30 crc kubenswrapper[4909]: I1126 07:04:30.370668 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfn6t" event={"ID":"9d1a9073-ad63-442c-b428-49b47ab69a83","Type":"ContainerDied","Data":"7fbfa78cfb0f999810f8990670312f74ff2b8e21e706159f3d543dbba8ef7e82"} Nov 26 07:04:30 crc kubenswrapper[4909]: I1126 07:04:30.371185 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfn6t" event={"ID":"9d1a9073-ad63-442c-b428-49b47ab69a83","Type":"ContainerStarted","Data":"d0a7e59af7cc30e09a33ceb14143b41fd0214e8215b7f8992408a4a0355b40cd"} Nov 26 07:04:30 crc kubenswrapper[4909]: I1126 07:04:30.374823 4909 generic.go:334] "Generic (PLEG): container finished" podID="662bf7ae-d0e1-462d-9e20-b74af9087f01" containerID="8648dbd7a20046b9ed4b3a451a0a318294ab6886486561b463c1aca535973cff" exitCode=0 Nov 26 07:04:30 crc kubenswrapper[4909]: I1126 07:04:30.374921 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnb2v" event={"ID":"662bf7ae-d0e1-462d-9e20-b74af9087f01","Type":"ContainerDied","Data":"8648dbd7a20046b9ed4b3a451a0a318294ab6886486561b463c1aca535973cff"} Nov 26 07:04:30 crc kubenswrapper[4909]: I1126 07:04:30.883663 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtq92"] Nov 26 07:04:30 crc kubenswrapper[4909]: 
I1126 07:04:30.884540 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:30 crc kubenswrapper[4909]: I1126 07:04:30.886400 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 26 07:04:30 crc kubenswrapper[4909]: I1126 07:04:30.895158 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtq92"] Nov 26 07:04:30 crc kubenswrapper[4909]: I1126 07:04:30.923662 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ffc057a-aedf-4a50-a7a4-ae7360212301-catalog-content\") pod \"community-operators-gtq92\" (UID: \"7ffc057a-aedf-4a50-a7a4-ae7360212301\") " pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:30 crc kubenswrapper[4909]: I1126 07:04:30.924018 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvwf\" (UniqueName: \"kubernetes.io/projected/7ffc057a-aedf-4a50-a7a4-ae7360212301-kube-api-access-plvwf\") pod \"community-operators-gtq92\" (UID: \"7ffc057a-aedf-4a50-a7a4-ae7360212301\") " pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:30 crc kubenswrapper[4909]: I1126 07:04:30.924049 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ffc057a-aedf-4a50-a7a4-ae7360212301-utilities\") pod \"community-operators-gtq92\" (UID: \"7ffc057a-aedf-4a50-a7a4-ae7360212301\") " pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.025294 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ffc057a-aedf-4a50-a7a4-ae7360212301-catalog-content\") pod \"community-operators-gtq92\" (UID: \"7ffc057a-aedf-4a50-a7a4-ae7360212301\") " pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.025425 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plvwf\" (UniqueName: \"kubernetes.io/projected/7ffc057a-aedf-4a50-a7a4-ae7360212301-kube-api-access-plvwf\") pod \"community-operators-gtq92\" (UID: \"7ffc057a-aedf-4a50-a7a4-ae7360212301\") " pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.025971 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ffc057a-aedf-4a50-a7a4-ae7360212301-catalog-content\") pod \"community-operators-gtq92\" (UID: \"7ffc057a-aedf-4a50-a7a4-ae7360212301\") " pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.026112 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ffc057a-aedf-4a50-a7a4-ae7360212301-utilities\") pod \"community-operators-gtq92\" (UID: \"7ffc057a-aedf-4a50-a7a4-ae7360212301\") " pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.025523 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7ffc057a-aedf-4a50-a7a4-ae7360212301-utilities\") pod \"community-operators-gtq92\" (UID: \"7ffc057a-aedf-4a50-a7a4-ae7360212301\") " pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.045611 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plvwf\" (UniqueName: \"kubernetes.io/projected/7ffc057a-aedf-4a50-a7a4-ae7360212301-kube-api-access-plvwf\") pod \"community-operators-gtq92\" (UID: \"7ffc057a-aedf-4a50-a7a4-ae7360212301\") " pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.076532 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7sxgl"] Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.077770 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.086811 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.088562 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7sxgl"] Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.127484 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/105cf0ca-2270-45cc-b9ba-0e1cad52d688-utilities\") pod \"redhat-operators-7sxgl\" (UID: \"105cf0ca-2270-45cc-b9ba-0e1cad52d688\") " pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.127551 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/105cf0ca-2270-45cc-b9ba-0e1cad52d688-catalog-content\") pod \"redhat-operators-7sxgl\" (UID: \"105cf0ca-2270-45cc-b9ba-0e1cad52d688\") " pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.127623 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hchzh\" (UniqueName: \"kubernetes.io/projected/105cf0ca-2270-45cc-b9ba-0e1cad52d688-kube-api-access-hchzh\") pod \"redhat-operators-7sxgl\" (UID: \"105cf0ca-2270-45cc-b9ba-0e1cad52d688\") " pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.207626 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.229239 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/105cf0ca-2270-45cc-b9ba-0e1cad52d688-catalog-content\") pod \"redhat-operators-7sxgl\" (UID: \"105cf0ca-2270-45cc-b9ba-0e1cad52d688\") " pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.229311 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hchzh\" (UniqueName: \"kubernetes.io/projected/105cf0ca-2270-45cc-b9ba-0e1cad52d688-kube-api-access-hchzh\") pod \"redhat-operators-7sxgl\" (UID: \"105cf0ca-2270-45cc-b9ba-0e1cad52d688\") " pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.229365 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/105cf0ca-2270-45cc-b9ba-0e1cad52d688-utilities\") pod \"redhat-operators-7sxgl\" (UID: \"105cf0ca-2270-45cc-b9ba-0e1cad52d688\") " pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.230356 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/105cf0ca-2270-45cc-b9ba-0e1cad52d688-utilities\") pod \"redhat-operators-7sxgl\" (UID: \"105cf0ca-2270-45cc-b9ba-0e1cad52d688\") " pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.230538 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/105cf0ca-2270-45cc-b9ba-0e1cad52d688-catalog-content\") pod \"redhat-operators-7sxgl\" (UID: \"105cf0ca-2270-45cc-b9ba-0e1cad52d688\") " pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.258462 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hchzh\" (UniqueName: \"kubernetes.io/projected/105cf0ca-2270-45cc-b9ba-0e1cad52d688-kube-api-access-hchzh\") pod \"redhat-operators-7sxgl\" (UID: \"105cf0ca-2270-45cc-b9ba-0e1cad52d688\") " pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.380998 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfn6t" event={"ID":"9d1a9073-ad63-442c-b428-49b47ab69a83","Type":"ContainerStarted","Data":"056a659a567fe945d36c435ec3218a8d7d79b1a09bb9c4e44e7379393d5d46fb"} Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.383194 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnb2v" event={"ID":"662bf7ae-d0e1-462d-9e20-b74af9087f01","Type":"ContainerStarted","Data":"def3395daa79f06206acc00d8e4c6b9ebfee65c711374ac1d80beaa60e6877c4"} Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.428198 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.431469 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dnb2v" podStartSLOduration=2.02980853 podStartE2EDuration="3.431447982s" podCreationTimestamp="2025-11-26 07:04:28 +0000 UTC" firstStartedPulling="2025-11-26 07:04:29.363034313 +0000 UTC m=+241.509245479" lastFinishedPulling="2025-11-26 07:04:30.764673755 +0000 UTC m=+242.910884931" observedRunningTime="2025-11-26 07:04:31.428356249 +0000 UTC m=+243.574567415" watchObservedRunningTime="2025-11-26 07:04:31.431447982 +0000 UTC m=+243.577659148" Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.596537 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtq92"] Nov 26 07:04:31 crc kubenswrapper[4909]: W1126 07:04:31.603898 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ffc057a_aedf_4a50_a7a4_ae7360212301.slice/crio-be5173e529b694f01ad44917c7bf47dbd1134179de58c6c1f7f99aca9fc410ae WatchSource:0}: Error finding container be5173e529b694f01ad44917c7bf47dbd1134179de58c6c1f7f99aca9fc410ae: Status 404 returned error can't find the container with id be5173e529b694f01ad44917c7bf47dbd1134179de58c6c1f7f99aca9fc410ae Nov 26 07:04:31 crc kubenswrapper[4909]: I1126 07:04:31.832294 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7sxgl"] Nov 26 07:04:31 crc kubenswrapper[4909]: W1126 07:04:31.832785 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod105cf0ca_2270_45cc_b9ba_0e1cad52d688.slice/crio-6cdf402122f8d7dfa5a07950c2f438a96798eb996617f74d2837a570a6606e41 WatchSource:0}: Error finding container 6cdf402122f8d7dfa5a07950c2f438a96798eb996617f74d2837a570a6606e41: Status 404 returned error can't find the container with id 6cdf402122f8d7dfa5a07950c2f438a96798eb996617f74d2837a570a6606e41 Nov 26 07:04:32 crc kubenswrapper[4909]: I1126 07:04:32.392122 4909 generic.go:334] "Generic (PLEG): container finished" podID="9d1a9073-ad63-442c-b428-49b47ab69a83" containerID="056a659a567fe945d36c435ec3218a8d7d79b1a09bb9c4e44e7379393d5d46fb" exitCode=0 Nov 26 07:04:32 crc kubenswrapper[4909]: I1126 07:04:32.392204 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfn6t" event={"ID":"9d1a9073-ad63-442c-b428-49b47ab69a83","Type":"ContainerDied","Data":"056a659a567fe945d36c435ec3218a8d7d79b1a09bb9c4e44e7379393d5d46fb"} Nov 26 07:04:32 crc kubenswrapper[4909]: I1126 07:04:32.393979 4909 generic.go:334] "Generic (PLEG): container finished" podID="7ffc057a-aedf-4a50-a7a4-ae7360212301" containerID="c973bae9d14f1197d9b1190db7a4e29999a4e4328423eb45091e79ef6aa00fe7" exitCode=0 Nov 26 07:04:32 crc kubenswrapper[4909]: I1126 07:04:32.394053 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq92" event={"ID":"7ffc057a-aedf-4a50-a7a4-ae7360212301","Type":"ContainerDied","Data":"c973bae9d14f1197d9b1190db7a4e29999a4e4328423eb45091e79ef6aa00fe7"} Nov 26 07:04:32 crc kubenswrapper[4909]: I1126 07:04:32.394070 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq92" 
event={"ID":"7ffc057a-aedf-4a50-a7a4-ae7360212301","Type":"ContainerStarted","Data":"be5173e529b694f01ad44917c7bf47dbd1134179de58c6c1f7f99aca9fc410ae"} Nov 26 07:04:32 crc kubenswrapper[4909]: I1126 07:04:32.403224 4909 generic.go:334] "Generic (PLEG): container finished" podID="105cf0ca-2270-45cc-b9ba-0e1cad52d688" containerID="265bfe27f711aab517ee862a82983c2274ea7c07b974446601aac335a2361aa2" exitCode=0 Nov 26 07:04:32 crc kubenswrapper[4909]: I1126 07:04:32.403282 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sxgl" event={"ID":"105cf0ca-2270-45cc-b9ba-0e1cad52d688","Type":"ContainerDied","Data":"265bfe27f711aab517ee862a82983c2274ea7c07b974446601aac335a2361aa2"} Nov 26 07:04:32 crc kubenswrapper[4909]: I1126 07:04:32.403341 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sxgl" event={"ID":"105cf0ca-2270-45cc-b9ba-0e1cad52d688","Type":"ContainerStarted","Data":"6cdf402122f8d7dfa5a07950c2f438a96798eb996617f74d2837a570a6606e41"} Nov 26 07:04:34 crc kubenswrapper[4909]: I1126 07:04:34.415203 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfn6t" event={"ID":"9d1a9073-ad63-442c-b428-49b47ab69a83","Type":"ContainerStarted","Data":"9e75229cbad0c0fd6971ad507ac09b73b4b035a8376a7bd1bc91a7e329fe9f18"} Nov 26 07:04:34 crc kubenswrapper[4909]: I1126 07:04:34.417659 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq92" event={"ID":"7ffc057a-aedf-4a50-a7a4-ae7360212301","Type":"ContainerStarted","Data":"2d7e92ebd51973c670187110ca04d0f32591dd09084fab86df3b6e403141e7e6"} Nov 26 07:04:34 crc kubenswrapper[4909]: I1126 07:04:34.419805 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sxgl" event={"ID":"105cf0ca-2270-45cc-b9ba-0e1cad52d688","Type":"ContainerStarted","Data":"35df5eddfc0317288c1bbdb0b3c0641be390e00e17a37fd7a4a202c396487be2"} Nov 26 07:04:34 crc kubenswrapper[4909]: I1126 07:04:34.440528 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lfn6t" podStartSLOduration=3.397332558 podStartE2EDuration="6.440510713s" podCreationTimestamp="2025-11-26 07:04:28 +0000 UTC" firstStartedPulling="2025-11-26 07:04:30.373502758 +0000 UTC m=+242.519713924" lastFinishedPulling="2025-11-26 07:04:33.416680913 +0000 UTC m=+245.562892079" observedRunningTime="2025-11-26 07:04:34.440351029 +0000 UTC m=+246.586562195" watchObservedRunningTime="2025-11-26 07:04:34.440510713 +0000 UTC m=+246.586721879" Nov 26 07:04:35 crc kubenswrapper[4909]: I1126 07:04:35.431043 4909 generic.go:334] "Generic (PLEG): container finished" podID="7ffc057a-aedf-4a50-a7a4-ae7360212301" containerID="2d7e92ebd51973c670187110ca04d0f32591dd09084fab86df3b6e403141e7e6" exitCode=0 Nov 26 07:04:35 crc kubenswrapper[4909]: I1126 07:04:35.431108 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq92" event={"ID":"7ffc057a-aedf-4a50-a7a4-ae7360212301","Type":"ContainerDied","Data":"2d7e92ebd51973c670187110ca04d0f32591dd09084fab86df3b6e403141e7e6"} Nov 26 07:04:35 crc kubenswrapper[4909]: I1126 07:04:35.448797 4909 generic.go:334] "Generic (PLEG): container finished" podID="105cf0ca-2270-45cc-b9ba-0e1cad52d688" containerID="35df5eddfc0317288c1bbdb0b3c0641be390e00e17a37fd7a4a202c396487be2" exitCode=0 Nov 26 07:04:35 crc kubenswrapper[4909]: I1126 07:04:35.448900 4909 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sxgl" event={"ID":"105cf0ca-2270-45cc-b9ba-0e1cad52d688","Type":"ContainerDied","Data":"35df5eddfc0317288c1bbdb0b3c0641be390e00e17a37fd7a4a202c396487be2"} Nov 26 07:04:36 crc kubenswrapper[4909]: I1126 07:04:36.479788 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtq92" event={"ID":"7ffc057a-aedf-4a50-a7a4-ae7360212301","Type":"ContainerStarted","Data":"3b638d776882fcf943df4b46916c9582b6ebf710731d7d86a120f45580b88792"} Nov 26 07:04:36 crc kubenswrapper[4909]: I1126 07:04:36.481604 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sxgl" event={"ID":"105cf0ca-2270-45cc-b9ba-0e1cad52d688","Type":"ContainerStarted","Data":"3e3a3f288d4080f7fa96a6af921537db27675e7c249018827a1d2cee857f7202"} Nov 26 07:04:36 crc kubenswrapper[4909]: I1126 07:04:36.500087 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtq92" podStartSLOduration=2.697093506 podStartE2EDuration="6.500069297s" podCreationTimestamp="2025-11-26 07:04:30 +0000 UTC" firstStartedPulling="2025-11-26 07:04:32.395156286 +0000 UTC m=+244.541367452" lastFinishedPulling="2025-11-26 07:04:36.198132077 +0000 UTC m=+248.344343243" observedRunningTime="2025-11-26 07:04:36.495006162 +0000 UTC m=+248.641217328" watchObservedRunningTime="2025-11-26 07:04:36.500069297 +0000 UTC m=+248.646280483" Nov 26 07:04:36 crc kubenswrapper[4909]: I1126 07:04:36.509683 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7sxgl" podStartSLOduration=1.8296645310000001 podStartE2EDuration="5.50965898s" podCreationTimestamp="2025-11-26 07:04:31 +0000 UTC" firstStartedPulling="2025-11-26 07:04:32.4051135 +0000 UTC m=+244.551324666" lastFinishedPulling="2025-11-26 07:04:36.085107949 +0000 UTC m=+248.231319115" observedRunningTime="2025-11-26 07:04:36.508399507 +0000 UTC m=+248.654610673" watchObservedRunningTime="2025-11-26 07:04:36.50965898 +0000 UTC m=+248.655870156" Nov 26 07:04:38 crc kubenswrapper[4909]: I1126 07:04:38.814055 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:38 crc kubenswrapper[4909]: I1126 07:04:38.814383 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:38 crc kubenswrapper[4909]: I1126 07:04:38.886406 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:39 crc kubenswrapper[4909]: I1126 07:04:39.004393 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:39 crc kubenswrapper[4909]: I1126 07:04:39.004566 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:39 crc kubenswrapper[4909]: I1126 07:04:39.055832 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:39 crc kubenswrapper[4909]: I1126 07:04:39.538127 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dnb2v" Nov 26 07:04:39 crc kubenswrapper[4909]: I1126 07:04:39.553146 4909 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lfn6t" Nov 26 07:04:41 crc kubenswrapper[4909]: I1126 07:04:41.208001 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:41 crc kubenswrapper[4909]: I1126 07:04:41.210283 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:41 crc kubenswrapper[4909]: I1126 07:04:41.251814 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:41 crc kubenswrapper[4909]: I1126 07:04:41.428496 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:41 crc kubenswrapper[4909]: I1126 07:04:41.428550 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:41 crc kubenswrapper[4909]: I1126 07:04:41.545977 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gtq92" Nov 26 07:04:42 crc kubenswrapper[4909]: I1126 07:04:42.470129 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7sxgl" podUID="105cf0ca-2270-45cc-b9ba-0e1cad52d688" containerName="registry-server" probeResult="failure" output=< Nov 26 07:04:42 crc kubenswrapper[4909]: timeout: failed to connect service ":50051" within 1s Nov 26 07:04:42 crc kubenswrapper[4909]: > Nov 26 07:04:51 crc kubenswrapper[4909]: I1126 07:04:51.473337 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:04:51 crc kubenswrapper[4909]: I1126 07:04:51.518112 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7sxgl" Nov 26 07:06:07 crc kubenswrapper[4909]: I1126 07:06:07.300920 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:06:07 crc kubenswrapper[4909]: I1126 07:06:07.301720 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:06:37 crc kubenswrapper[4909]: I1126 07:06:37.301499 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:06:37 crc kubenswrapper[4909]: I1126 07:06:37.302878 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 
07:07:07 crc kubenswrapper[4909]: I1126 07:07:07.301382 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:07:07 crc kubenswrapper[4909]: I1126 07:07:07.302018 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:07:07 crc kubenswrapper[4909]: I1126 07:07:07.302072 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:07:07 crc kubenswrapper[4909]: I1126 07:07:07.304207 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6aa4bfaa92cc741c58fea8e96b8993071a005ef1d633d1dadf1211dbb440e70"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 07:07:07 crc kubenswrapper[4909]: I1126 07:07:07.304354 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://d6aa4bfaa92cc741c58fea8e96b8993071a005ef1d633d1dadf1211dbb440e70" gracePeriod=600 Nov 26 07:07:08 crc kubenswrapper[4909]: I1126 07:07:08.397275 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="d6aa4bfaa92cc741c58fea8e96b8993071a005ef1d633d1dadf1211dbb440e70" exitCode=0 Nov 26 07:07:08 crc kubenswrapper[4909]: I1126 07:07:08.397369 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"d6aa4bfaa92cc741c58fea8e96b8993071a005ef1d633d1dadf1211dbb440e70"} Nov 26 07:07:08 crc kubenswrapper[4909]: I1126 07:07:08.397724 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"078c5f364f15712f7c294800057e109895b88211acfc083adc8b6dc0d2e41112"} Nov 26 07:07:08 crc kubenswrapper[4909]: I1126 07:07:08.397747 4909 scope.go:117] "RemoveContainer" containerID="f22f532467357dc9c5d21c12ed17e97b140e685072335cf86878cd1a160297bb" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.547105 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-75xcx"] Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.548516 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.568272 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-75xcx"] Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.672702 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48835eb5-dc0a-4083-9764-58e3cb78d1b3-registry-tls\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.672758 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.672795 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48835eb5-dc0a-4083-9764-58e3cb78d1b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.672820 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48835eb5-dc0a-4083-9764-58e3cb78d1b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.672837 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48835eb5-dc0a-4083-9764-58e3cb78d1b3-bound-sa-token\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.672867 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48835eb5-dc0a-4083-9764-58e3cb78d1b3-trusted-ca\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.672903 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6hzk\" (UniqueName: \"kubernetes.io/projected/48835eb5-dc0a-4083-9764-58e3cb78d1b3-kube-api-access-k6hzk\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.672922 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/48835eb5-dc0a-4083-9764-58e3cb78d1b3-registry-certificates\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.696849 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.773966 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48835eb5-dc0a-4083-9764-58e3cb78d1b3-bound-sa-token\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.774410 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48835eb5-dc0a-4083-9764-58e3cb78d1b3-trusted-ca\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.774441 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6hzk\" (UniqueName: \"kubernetes.io/projected/48835eb5-dc0a-4083-9764-58e3cb78d1b3-kube-api-access-k6hzk\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.774469 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48835eb5-dc0a-4083-9764-58e3cb78d1b3-registry-certificates\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.774529 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48835eb5-dc0a-4083-9764-58e3cb78d1b3-registry-tls\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.774570 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48835eb5-dc0a-4083-9764-58e3cb78d1b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.774616 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48835eb5-dc0a-4083-9764-58e3cb78d1b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.775395 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48835eb5-dc0a-4083-9764-58e3cb78d1b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.776420 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48835eb5-dc0a-4083-9764-58e3cb78d1b3-trusted-ca\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.777374 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48835eb5-dc0a-4083-9764-58e3cb78d1b3-registry-certificates\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.784788 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48835eb5-dc0a-4083-9764-58e3cb78d1b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.784801 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48835eb5-dc0a-4083-9764-58e3cb78d1b3-registry-tls\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.801031 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48835eb5-dc0a-4083-9764-58e3cb78d1b3-bound-sa-token\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.806216 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6hzk\" (UniqueName: \"kubernetes.io/projected/48835eb5-dc0a-4083-9764-58e3cb78d1b3-kube-api-access-k6hzk\") pod \"image-registry-66df7c8f76-75xcx\" (UID: \"48835eb5-dc0a-4083-9764-58e3cb78d1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:33 crc kubenswrapper[4909]: I1126 07:07:33.864559 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:34 crc kubenswrapper[4909]: I1126 07:07:34.052666 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-75xcx"] Nov 26 07:07:34 crc kubenswrapper[4909]: I1126 07:07:34.551391 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" event={"ID":"48835eb5-dc0a-4083-9764-58e3cb78d1b3","Type":"ContainerStarted","Data":"ca7a57c3e81a1e86828cd74195c281370b6e59e97e2f926555f8b607b812144a"} Nov 26 07:07:34 crc kubenswrapper[4909]: I1126 07:07:34.551647 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" event={"ID":"48835eb5-dc0a-4083-9764-58e3cb78d1b3","Type":"ContainerStarted","Data":"216f5d49e47912f486a83045cbb8adbb853283dae7b2fa4e7570301c2051d9dd"} Nov 26 07:07:34 crc kubenswrapper[4909]: I1126 07:07:34.551802 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:53 crc kubenswrapper[4909]: I1126 07:07:53.871063 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" Nov 26 07:07:53 crc kubenswrapper[4909]: I1126 07:07:53.895209 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-75xcx" podStartSLOduration=20.895183469 podStartE2EDuration="20.895183469s" podCreationTimestamp="2025-11-26 07:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:07:34.570736144 +0000 UTC m=+426.716947380" watchObservedRunningTime="2025-11-26 07:07:53.895183469 +0000 UTC m=+446.041394635" Nov 26 07:07:53 crc kubenswrapper[4909]: I1126 07:07:53.922426 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wqlmg"] Nov 26 07:08:18 crc kubenswrapper[4909]: I1126 07:08:18.967365 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" podUID="03e3a595-33da-47a5-ba74-cb7c535134ca" containerName="registry" containerID="cri-o://94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087" gracePeriod=30 Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.412457 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.544283 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03e3a595-33da-47a5-ba74-cb7c535134ca-ca-trust-extracted\") pod \"03e3a595-33da-47a5-ba74-cb7c535134ca\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.544384 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjrht\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-kube-api-access-kjrht\") pod \"03e3a595-33da-47a5-ba74-cb7c535134ca\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.544545 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"03e3a595-33da-47a5-ba74-cb7c535134ca\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.544662 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-trusted-ca\") pod \"03e3a595-33da-47a5-ba74-cb7c535134ca\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.544704 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-certificates\") pod \"03e3a595-33da-47a5-ba74-cb7c535134ca\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.544776 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-tls\") pod \"03e3a595-33da-47a5-ba74-cb7c535134ca\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.544808 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03e3a595-33da-47a5-ba74-cb7c535134ca-installation-pull-secrets\") pod \"03e3a595-33da-47a5-ba74-cb7c535134ca\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.544836 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-bound-sa-token\") pod \"03e3a595-33da-47a5-ba74-cb7c535134ca\" (UID: \"03e3a595-33da-47a5-ba74-cb7c535134ca\") " Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.545860 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "03e3a595-33da-47a5-ba74-cb7c535134ca" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.546028 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "03e3a595-33da-47a5-ba74-cb7c535134ca" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.551682 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "03e3a595-33da-47a5-ba74-cb7c535134ca" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.553514 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e3a595-33da-47a5-ba74-cb7c535134ca-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "03e3a595-33da-47a5-ba74-cb7c535134ca" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.554980 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-kube-api-access-kjrht" (OuterVolumeSpecName: "kube-api-access-kjrht") pod "03e3a595-33da-47a5-ba74-cb7c535134ca" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca"). InnerVolumeSpecName "kube-api-access-kjrht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.555510 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "03e3a595-33da-47a5-ba74-cb7c535134ca" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.560314 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "03e3a595-33da-47a5-ba74-cb7c535134ca" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.563186 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e3a595-33da-47a5-ba74-cb7c535134ca-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "03e3a595-33da-47a5-ba74-cb7c535134ca" (UID: "03e3a595-33da-47a5-ba74-cb7c535134ca"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.646870 4909 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.646918 4909 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03e3a595-33da-47a5-ba74-cb7c535134ca-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.646935 4909 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.646949 4909 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03e3a595-33da-47a5-ba74-cb7c535134ca-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.646963 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjrht\" (UniqueName: \"kubernetes.io/projected/03e3a595-33da-47a5-ba74-cb7c535134ca-kube-api-access-kjrht\") on node \"crc\" DevicePath \"\"" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.646974 4909 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.646985 4909 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03e3a595-33da-47a5-ba74-cb7c535134ca-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.823941 4909 generic.go:334] "Generic (PLEG): container finished" podID="03e3a595-33da-47a5-ba74-cb7c535134ca" containerID="94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087" exitCode=0 Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.823996 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" event={"ID":"03e3a595-33da-47a5-ba74-cb7c535134ca","Type":"ContainerDied","Data":"94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087"} Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.824043 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" event={"ID":"03e3a595-33da-47a5-ba74-cb7c535134ca","Type":"ContainerDied","Data":"7d9f137e482a429bae52df64157bdb58b89d25dca6f51affacee0fa28bbfb306"} Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.824065 4909 scope.go:117] "RemoveContainer" containerID="94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.824501 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wqlmg" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.846194 4909 scope.go:117] "RemoveContainer" containerID="94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087" Nov 26 07:08:19 crc kubenswrapper[4909]: E1126 07:08:19.846884 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087\": container with ID starting with 94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087 not found: ID does not exist" containerID="94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.847020 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087"} err="failed to get container status \"94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087\": rpc error: code = NotFound desc = could not find container \"94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087\": container with ID starting with 94e4bcf39a67d8e36f584e335fc0e95b9b23fbd3bcfc1769aea074cc0a1e6087 not found: ID does not exist" Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.865741 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wqlmg"] Nov 26 07:08:19 crc kubenswrapper[4909]: I1126 07:08:19.869300 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wqlmg"] Nov 26 07:08:20 crc kubenswrapper[4909]: I1126 07:08:20.509164 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e3a595-33da-47a5-ba74-cb7c535134ca" path="/var/lib/kubelet/pods/03e3a595-33da-47a5-ba74-cb7c535134ca/volumes" Nov 26 07:09:07 crc kubenswrapper[4909]: I1126 07:09:07.300973 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:09:07 crc kubenswrapper[4909]: I1126 07:09:07.301544 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:09:37 crc kubenswrapper[4909]: I1126 07:09:37.301042 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:09:37 crc kubenswrapper[4909]: I1126 07:09:37.301621 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:10:07 crc kubenswrapper[4909]: I1126 07:10:07.301668 4909 patch_prober.go:28] interesting 
pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:10:07 crc kubenswrapper[4909]: I1126 07:10:07.302300 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:10:07 crc kubenswrapper[4909]: I1126 07:10:07.302374 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:10:07 crc kubenswrapper[4909]: I1126 07:10:07.303294 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"078c5f364f15712f7c294800057e109895b88211acfc083adc8b6dc0d2e41112"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 07:10:07 crc kubenswrapper[4909]: I1126 07:10:07.303398 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://078c5f364f15712f7c294800057e109895b88211acfc083adc8b6dc0d2e41112" gracePeriod=600 Nov 26 07:10:07 crc kubenswrapper[4909]: I1126 07:10:07.559326 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="078c5f364f15712f7c294800057e109895b88211acfc083adc8b6dc0d2e41112" exitCode=0 Nov 26 07:10:07 crc kubenswrapper[4909]: I1126 07:10:07.559415 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"078c5f364f15712f7c294800057e109895b88211acfc083adc8b6dc0d2e41112"} Nov 26 07:10:07 crc kubenswrapper[4909]: I1126 07:10:07.559761 4909 scope.go:117] "RemoveContainer" containerID="d6aa4bfaa92cc741c58fea8e96b8993071a005ef1d633d1dadf1211dbb440e70" Nov 26 07:10:08 crc kubenswrapper[4909]: I1126 07:10:08.572079 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"d4c41dceb1d36b1bd9dc641bc254e711b1901f8039253b64ac1c47b6d8051be7"} Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.309365 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78qth"] Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.310611 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovn-controller" containerID="cri-o://3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495" gracePeriod=30 Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.310684 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" 
podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="nbdb" containerID="cri-o://c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7" gracePeriod=30 Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.310761 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="sbdb" containerID="cri-o://1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282" gracePeriod=30 Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.310815 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovn-acl-logging" containerID="cri-o://adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314" gracePeriod=30 Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.310790 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779" gracePeriod=30 Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.310814 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="kube-rbac-proxy-node" containerID="cri-o://3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85" gracePeriod=30 Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.310890 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="northd" containerID="cri-o://72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce" gracePeriod=30 Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.348442 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" containerID="cri-o://2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4" gracePeriod=30 Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.663579 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/3.log" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.665693 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovn-acl-logging/0.log" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.666178 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovn-controller/0.log" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.666542 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712023 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jz8n4"] Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712201 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="kube-rbac-proxy-ovn-metrics" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712212 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="kube-rbac-proxy-ovn-metrics" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712219 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712225 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712231 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712239 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712247 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="kube-rbac-proxy-node" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712252 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="kube-rbac-proxy-node" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712260 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="sbdb" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712266 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="sbdb" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712274 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712279 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712286 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="kubecfg-setup" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712293 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="kubecfg-setup" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712302 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovn-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712307 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovn-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712314 4909 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="nbdb" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712320 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="nbdb" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712328 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="northd" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712333 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="northd" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712341 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e3a595-33da-47a5-ba74-cb7c535134ca" containerName="registry" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712347 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e3a595-33da-47a5-ba74-cb7c535134ca" containerName="registry" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712356 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovn-acl-logging" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712361 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovn-acl-logging" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712442 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="kube-rbac-proxy-ovn-metrics" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712451 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="sbdb" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712457 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712464 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e3a595-33da-47a5-ba74-cb7c535134ca" containerName="registry" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712472 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="kube-rbac-proxy-node" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712481 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712488 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="nbdb" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712497 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovn-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712505 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="northd" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712512 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712519 4909 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712526 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovn-acl-logging" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712607 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712627 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.712636 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712642 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.712722 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerName="ovnkube-controller" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.714895 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830203 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-ovn-kubernetes\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830306 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-kubelet\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830360 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830390 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-openvswitch\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830467 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-ovn\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830417 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830532 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-log-socket\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830563 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-systemd\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830464 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830566 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830628 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovn-node-metrics-cert\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830654 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-netd\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830673 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-systemd-units\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830748 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8scj\" (UniqueName: \"kubernetes.io/projected/bbfa11b9-2582-454a-9a97-63d505eccc8b-kube-api-access-s8scj\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830628 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-log-socket" (OuterVolumeSpecName: "log-socket") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830798 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830767 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830843 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-netns\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830924 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.830991 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.831022 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-var-lib-openvswitch\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.831078 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-bin\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.831172 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.831253 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-slash" (OuterVolumeSpecName: "host-slash") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.831288 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-slash\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.831362 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-script-lib\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.831513 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-node-log" (OuterVolumeSpecName: "node-log") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.831467 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-node-log\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.831642 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-config\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.831718 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-env-overrides\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.831805 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832087 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832112 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832173 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832235 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-etc-openvswitch\") pod \"bbfa11b9-2582-454a-9a97-63d505eccc8b\" (UID: \"bbfa11b9-2582-454a-9a97-63d505eccc8b\") " Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832169 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832380 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832522 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-run-systemd\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832624 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-ovnkube-script-lib\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832816 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkmmc\" (UniqueName: \"kubernetes.io/projected/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-kube-api-access-fkmmc\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832869 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-var-lib-openvswitch\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832901 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-kubelet\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832928 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-etc-openvswitch\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.832961 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-log-socket\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833010 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-slash\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833056 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-cni-netd\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833117 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-systemd-units\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833145 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-run-openvswitch\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833166 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-ovnkube-config\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833213 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-ovn-node-metrics-cert\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833343 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833386 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-cni-bin\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833446 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-run-netns\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833488 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-node-log\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833521 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833550 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-run-ovn\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833580 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-env-overrides\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833712 4909 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833779 4909 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833797 4909 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833811 4909 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833823 4909 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833835 4909 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833845 4909 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-log-socket\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833855 4909 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833866 4909 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833879 4909 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833892 4909 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833905 4909 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833921 4909 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-host-slash\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833934 4909 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833945 4909 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-node-log\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833958 4909 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovnkube-config\") on node 
\"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.833972 4909 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bbfa11b9-2582-454a-9a97-63d505eccc8b-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.837563 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.837577 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbfa11b9-2582-454a-9a97-63d505eccc8b-kube-api-access-s8scj" (OuterVolumeSpecName: "kube-api-access-s8scj") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "kube-api-access-s8scj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.844916 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "bbfa11b9-2582-454a-9a97-63d505eccc8b" (UID: "bbfa11b9-2582-454a-9a97-63d505eccc8b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.935689 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.935772 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-cni-bin\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.935810 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-run-netns\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.935854 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-cni-bin\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.935854 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-node-log\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" 
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.935900 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-node-log\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.935915 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.935938 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-run-ovn\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.935954 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-env-overrides\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.935977 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-run-systemd\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.935998 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-ovnkube-script-lib\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936021 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkmmc\" (UniqueName: \"kubernetes.io/projected/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-kube-api-access-fkmmc\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936039 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-var-lib-openvswitch\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936039 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-run-netns\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936099 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936042 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-run-ovn\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936066 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-run-systemd\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936072 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-kubelet\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936053 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-kubelet\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936259 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-var-lib-openvswitch\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936329 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-etc-openvswitch\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936401 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-etc-openvswitch\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936450 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-log-socket\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936487 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-slash\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936636 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-cni-netd\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936687 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-cni-netd\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936643 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-slash\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936544 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-log-socket\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936746 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-systemd-units\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936810 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-ovnkube-script-lib\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936831 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-systemd-units\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936856 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4"
Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936895 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-run-openvswitch\") pod \"ovnkube-node-jz8n4\" (UID:
\"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936919 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-ovnkube-config\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936954 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-run-openvswitch\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.936980 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-ovn-node-metrics-cert\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.937041 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-env-overrides\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.937233 4909 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bbfa11b9-2582-454a-9a97-63d505eccc8b-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.937258 4909 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bbfa11b9-2582-454a-9a97-63d505eccc8b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.937276 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8scj\" (UniqueName: \"kubernetes.io/projected/bbfa11b9-2582-454a-9a97-63d505eccc8b-kube-api-access-s8scj\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.937672 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-ovnkube-config\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.940746 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-ovn-node-metrics-cert\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.965964 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkmmc\" (UniqueName: \"kubernetes.io/projected/86ed9c94-e66e-4d26-abdc-4fa12fb3772d-kube-api-access-fkmmc\") pod \"ovnkube-node-jz8n4\" (UID: \"86ed9c94-e66e-4d26-abdc-4fa12fb3772d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.992713 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b4ts_3d586ea3-b189-476f-9e44-4579388f3107/kube-multus/2.log" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.993319 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b4ts_3d586ea3-b189-476f-9e44-4579388f3107/kube-multus/1.log" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.993373 4909 generic.go:334] "Generic (PLEG): container finished" podID="3d586ea3-b189-476f-9e44-4579388f3107" containerID="0adb77440dd3fcd99f6d9a0e77ab2d7cb635055d8ae27a82d06d45441a542384" exitCode=2 Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.993433 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b4ts" event={"ID":"3d586ea3-b189-476f-9e44-4579388f3107","Type":"ContainerDied","Data":"0adb77440dd3fcd99f6d9a0e77ab2d7cb635055d8ae27a82d06d45441a542384"} Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.993467 4909 scope.go:117] "RemoveContainer" containerID="e117672004d154d1b63fbd5d09455e6439e6b95c2bc027957d3f39ef745c79be" Nov 26 07:11:17 crc kubenswrapper[4909]: I1126 07:11:17.994254 4909 scope.go:117] "RemoveContainer" containerID="0adb77440dd3fcd99f6d9a0e77ab2d7cb635055d8ae27a82d06d45441a542384" Nov 26 07:11:17 crc kubenswrapper[4909]: E1126 07:11:17.994816 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-6b4ts_openshift-multus(3d586ea3-b189-476f-9e44-4579388f3107)\"" pod="openshift-multus/multus-6b4ts" podUID="3d586ea3-b189-476f-9e44-4579388f3107" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.001706 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovnkube-controller/3.log" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.004243 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovn-acl-logging/0.log" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.004860 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-78qth_bbfa11b9-2582-454a-9a97-63d505eccc8b/ovn-controller/0.log" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006355 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4" exitCode=0 Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006392 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282" exitCode=0 Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006401 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7" exitCode=0 Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006410 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce" exitCode=0 Nov 26 07:11:18 crc 
kubenswrapper[4909]: I1126 07:11:18.006421 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779" exitCode=0 Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006453 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85" exitCode=0 Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006465 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314" exitCode=143 Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006473 4909 generic.go:334] "Generic (PLEG): container finished" podID="bbfa11b9-2582-454a-9a97-63d505eccc8b" containerID="3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495" exitCode=143 Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006497 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006530 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006552 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006564 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006578 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006591 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006625 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006640 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006646 4909 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006652 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006658 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006664 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006670 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006675 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006680 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006685 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006694 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006707 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006715 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006721 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006726 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006732 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006737 4909 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006742 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006747 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006753 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006759 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006767 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006692 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.006775 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.007933 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008000 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008051 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008103 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008149 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008198 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008246 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008295 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008344 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008411 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-78qth" event={"ID":"bbfa11b9-2582-454a-9a97-63d505eccc8b","Type":"ContainerDied","Data":"052cea06e781cab69fe47aef87dfd12543446ec70651b0b66677e37c3391ee9b"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008478 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008542 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008594 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008672 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008720 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008764 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008818 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008881 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008931 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.008978 4909 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef"} Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.033136 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.037069 4909 scope.go:117] "RemoveContainer" containerID="2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.063383 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78qth"] Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.067116 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-78qth"] Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.069681 4909 scope.go:117] "RemoveContainer" containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.086828 4909 scope.go:117] "RemoveContainer" containerID="1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.111197 4909 scope.go:117] "RemoveContainer" containerID="c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.126518 4909 scope.go:117] "RemoveContainer" containerID="72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.149682 4909 scope.go:117] "RemoveContainer" containerID="535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.162317 4909 scope.go:117] "RemoveContainer" containerID="3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.176574 4909 scope.go:117] "RemoveContainer" containerID="adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.204794 4909 scope.go:117] "RemoveContainer" containerID="3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.266209 4909 scope.go:117] "RemoveContainer" containerID="ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.279426 4909 scope.go:117] "RemoveContainer" containerID="2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4" Nov 26 07:11:18 crc kubenswrapper[4909]: E1126 07:11:18.279885 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4\": container with ID starting with 2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4 not found: ID does not exist" containerID="2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.279931 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4"} err="failed to get container status \"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4\": rpc error: code = NotFound desc = could not find container \"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4\": container with ID starting with 2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.279959 4909 scope.go:117] "RemoveContainer" 
containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" Nov 26 07:11:18 crc kubenswrapper[4909]: E1126 07:11:18.280329 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\": container with ID starting with 4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39 not found: ID does not exist" containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.280367 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39"} err="failed to get container status \"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\": rpc error: code = NotFound desc = could not find container \"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\": container with ID starting with 4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.280421 4909 scope.go:117] "RemoveContainer" containerID="1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282" Nov 26 07:11:18 crc kubenswrapper[4909]: E1126 07:11:18.281033 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\": container with ID starting with 1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282 not found: ID does not exist" containerID="1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.281539 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282"} err="failed to get container status \"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\": rpc error: code = NotFound desc = could not find container \"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\": container with ID starting with 1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.281641 4909 scope.go:117] "RemoveContainer" containerID="c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7" Nov 26 07:11:18 crc kubenswrapper[4909]: E1126 07:11:18.282049 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\": container with ID starting with c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7 not found: ID does not exist" containerID="c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.282076 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7"} err="failed to get container status \"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\": rpc error: code = NotFound desc = could not find container \"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\": container with ID starting with 
c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.282095 4909 scope.go:117] "RemoveContainer" containerID="72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce" Nov 26 07:11:18 crc kubenswrapper[4909]: E1126 07:11:18.282325 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\": container with ID starting with 72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce not found: ID does not exist" containerID="72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.282358 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce"} err="failed to get container status \"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\": rpc error: code = NotFound desc = could not find container \"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\": container with ID starting with 72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.282378 4909 scope.go:117] "RemoveContainer" containerID="535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779" Nov 26 07:11:18 crc kubenswrapper[4909]: E1126 07:11:18.282599 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\": container with ID starting with 535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779 not found: ID does not exist" containerID="535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.282637 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779"} err="failed to get container status \"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\": rpc error: code = NotFound desc = could not find container \"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\": container with ID starting with 535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.282652 4909 scope.go:117] "RemoveContainer" containerID="3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85" Nov 26 07:11:18 crc kubenswrapper[4909]: E1126 07:11:18.282862 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\": container with ID starting with 3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85 not found: ID does not exist" containerID="3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.282895 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85"} err="failed to get container status \"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\": rpc 
error: code = NotFound desc = could not find container \"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\": container with ID starting with 3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.282914 4909 scope.go:117] "RemoveContainer" containerID="adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314" Nov 26 07:11:18 crc kubenswrapper[4909]: E1126 07:11:18.283444 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\": container with ID starting with adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314 not found: ID does not exist" containerID="adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.283473 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314"} err="failed to get container status \"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\": rpc error: code = NotFound desc = could not find container \"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\": container with ID starting with adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.283493 4909 scope.go:117] "RemoveContainer" containerID="3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495" Nov 26 07:11:18 crc kubenswrapper[4909]: E1126 07:11:18.283798 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\": container with ID starting with 3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495 not found: ID does not exist" containerID="3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.283856 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495"} err="failed to get container status \"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\": rpc error: code = NotFound desc = could not find container \"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\": container with ID starting with 3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.283892 4909 scope.go:117] "RemoveContainer" containerID="ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef" Nov 26 07:11:18 crc kubenswrapper[4909]: E1126 07:11:18.284165 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\": container with ID starting with ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef not found: ID does not exist" containerID="ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.284188 4909 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef"} err="failed to get container status \"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\": rpc error: code = NotFound desc = could not find container \"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\": container with ID starting with ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.284204 4909 scope.go:117] "RemoveContainer" containerID="2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.291041 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4"} err="failed to get container status \"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4\": rpc error: code = NotFound desc = could not find container \"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4\": container with ID starting with 2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.291080 4909 scope.go:117] "RemoveContainer" containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.291470 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39"} err="failed to get container status \"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\": rpc error: code = NotFound desc = could not find container \"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\": container with ID starting with 4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.291520 4909 scope.go:117] "RemoveContainer" containerID="1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.291838 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282"} err="failed to get container status \"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\": rpc error: code = NotFound desc = could not find container \"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\": container with ID starting with 1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.291852 4909 scope.go:117] "RemoveContainer" containerID="c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.292042 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7"} err="failed to get container status \"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\": rpc error: code = NotFound desc = could not find container \"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\": container with ID starting with c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7 not found: ID does not exist" Nov 
26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.292054 4909 scope.go:117] "RemoveContainer" containerID="72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.292242 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce"} err="failed to get container status \"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\": rpc error: code = NotFound desc = could not find container \"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\": container with ID starting with 72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.292286 4909 scope.go:117] "RemoveContainer" containerID="535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.292497 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779"} err="failed to get container status \"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\": rpc error: code = NotFound desc = could not find container \"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\": container with ID starting with 535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.292514 4909 scope.go:117] "RemoveContainer" containerID="3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.294743 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85"} err="failed to get container status \"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\": rpc error: code = NotFound desc = could not find container \"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\": container with ID starting with 3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.294770 4909 scope.go:117] "RemoveContainer" containerID="adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.294976 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314"} err="failed to get container status \"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\": rpc error: code = NotFound desc = could not find container \"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\": container with ID starting with adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.294992 4909 scope.go:117] "RemoveContainer" containerID="3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.296603 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495"} err="failed to get container status 
\"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\": rpc error: code = NotFound desc = could not find container \"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\": container with ID starting with 3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.296621 4909 scope.go:117] "RemoveContainer" containerID="ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.296912 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef"} err="failed to get container status \"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\": rpc error: code = NotFound desc = could not find container \"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\": container with ID starting with ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.296927 4909 scope.go:117] "RemoveContainer" containerID="2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.297677 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4"} err="failed to get container status \"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4\": rpc error: code = NotFound desc = could not find container \"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4\": container with ID starting with 2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.297738 4909 scope.go:117] "RemoveContainer" containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.298050 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39"} err="failed to get container status \"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\": rpc error: code = NotFound desc = could not find container \"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\": container with ID starting with 4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.298071 4909 scope.go:117] "RemoveContainer" containerID="1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.298377 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282"} err="failed to get container status \"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\": rpc error: code = NotFound desc = could not find container \"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\": container with ID starting with 1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.298403 4909 scope.go:117] "RemoveContainer" 
containerID="c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.298943 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7"} err="failed to get container status \"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\": rpc error: code = NotFound desc = could not find container \"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\": container with ID starting with c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.299008 4909 scope.go:117] "RemoveContainer" containerID="72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.299440 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce"} err="failed to get container status \"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\": rpc error: code = NotFound desc = could not find container \"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\": container with ID starting with 72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.299490 4909 scope.go:117] "RemoveContainer" containerID="535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.299842 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779"} err="failed to get container status \"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\": rpc error: code = NotFound desc = could not find container \"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\": container with ID starting with 535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.299869 4909 scope.go:117] "RemoveContainer" containerID="3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.300533 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85"} err="failed to get container status \"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\": rpc error: code = NotFound desc = could not find container \"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\": container with ID starting with 3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.300718 4909 scope.go:117] "RemoveContainer" containerID="adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.301248 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314"} err="failed to get container status \"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\": rpc error: code = NotFound desc = could not find 
container \"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\": container with ID starting with adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.301270 4909 scope.go:117] "RemoveContainer" containerID="3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.301691 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495"} err="failed to get container status \"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\": rpc error: code = NotFound desc = could not find container \"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\": container with ID starting with 3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.301733 4909 scope.go:117] "RemoveContainer" containerID="ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.302081 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef"} err="failed to get container status \"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\": rpc error: code = NotFound desc = could not find container \"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\": container with ID starting with ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.302101 4909 scope.go:117] "RemoveContainer" containerID="2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.302385 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4"} err="failed to get container status \"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4\": rpc error: code = NotFound desc = could not find container \"2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4\": container with ID starting with 2b2559f6ca24d98810bd5b332b823bf293d597eb78b637af331bf91364e164b4 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.302408 4909 scope.go:117] "RemoveContainer" containerID="4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.302792 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39"} err="failed to get container status \"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\": rpc error: code = NotFound desc = could not find container \"4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39\": container with ID starting with 4fca76c708378a347ba8e87752e76f62fe179fb09d2bd0b81fb4377435003d39 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.302819 4909 scope.go:117] "RemoveContainer" containerID="1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.303141 4909 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282"} err="failed to get container status \"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\": rpc error: code = NotFound desc = could not find container \"1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282\": container with ID starting with 1ae08e8f61a3023c8dbacbef71a1bbd97ee3b2075438fe4b05863c5fbe0f2282 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.303163 4909 scope.go:117] "RemoveContainer" containerID="c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.303452 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7"} err="failed to get container status \"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\": rpc error: code = NotFound desc = could not find container \"c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7\": container with ID starting with c00800130b2c434a1074725e06cbf3d5f99931cd4d631c1e9a06b95b6cd4b1a7 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.303471 4909 scope.go:117] "RemoveContainer" containerID="72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.303758 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce"} err="failed to get container status \"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\": rpc error: code = NotFound desc = could not find container \"72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce\": container with ID starting with 72d59bb3360312c854eae96a24511b779e080d527e452b10fa9838b2969336ce not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.303801 4909 scope.go:117] "RemoveContainer" containerID="535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.304153 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779"} err="failed to get container status \"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\": rpc error: code = NotFound desc = could not find container \"535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779\": container with ID starting with 535c87f921391c763ab13113de717e13a2d1bd2145f4883bb284bc62cc9f6779 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.304180 4909 scope.go:117] "RemoveContainer" containerID="3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.304528 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85"} err="failed to get container status \"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\": rpc error: code = NotFound desc = could not find container \"3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85\": container with ID starting with 
3a4f64f5ead0619292058418451f3ee19b962a93a580e86d1b5e397ec38d9c85 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.304559 4909 scope.go:117] "RemoveContainer" containerID="adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.305059 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314"} err="failed to get container status \"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\": rpc error: code = NotFound desc = could not find container \"adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314\": container with ID starting with adbf07771a68082f866b22a566186dd61453a370c6cfe0a19107b34de90ca314 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.305087 4909 scope.go:117] "RemoveContainer" containerID="3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.305422 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495"} err="failed to get container status \"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\": rpc error: code = NotFound desc = could not find container \"3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495\": container with ID starting with 3fd50396bf91553a337452b206bb3824151cbfb278cf514589a38011343fa495 not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.305481 4909 scope.go:117] "RemoveContainer" containerID="ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.305849 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef"} err="failed to get container status \"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\": rpc error: code = NotFound desc = could not find container \"ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef\": container with ID starting with ddc35a919b76adb9965224986887358bb0edf01c899047ca748591e02a0800ef not found: ID does not exist" Nov 26 07:11:18 crc kubenswrapper[4909]: I1126 07:11:18.512932 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbfa11b9-2582-454a-9a97-63d505eccc8b" path="/var/lib/kubelet/pods/bbfa11b9-2582-454a-9a97-63d505eccc8b/volumes" Nov 26 07:11:19 crc kubenswrapper[4909]: I1126 07:11:19.017582 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b4ts_3d586ea3-b189-476f-9e44-4579388f3107/kube-multus/2.log" Nov 26 07:11:19 crc kubenswrapper[4909]: I1126 07:11:19.019999 4909 generic.go:334] "Generic (PLEG): container finished" podID="86ed9c94-e66e-4d26-abdc-4fa12fb3772d" containerID="2d50ffff36f42ec8b55f5fac19f3c793ccddc8b42c7b57a77570ce35b2c99349" exitCode=0 Nov 26 07:11:19 crc kubenswrapper[4909]: I1126 07:11:19.020035 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" event={"ID":"86ed9c94-e66e-4d26-abdc-4fa12fb3772d","Type":"ContainerDied","Data":"2d50ffff36f42ec8b55f5fac19f3c793ccddc8b42c7b57a77570ce35b2c99349"} Nov 26 07:11:19 crc kubenswrapper[4909]: I1126 07:11:19.020056 4909 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" event={"ID":"86ed9c94-e66e-4d26-abdc-4fa12fb3772d","Type":"ContainerStarted","Data":"7379c88c2d34534c60965fe5664e6446c6c67c4ac3c6298375b06df2086e9196"} Nov 26 07:11:20 crc kubenswrapper[4909]: I1126 07:11:20.030205 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" event={"ID":"86ed9c94-e66e-4d26-abdc-4fa12fb3772d","Type":"ContainerStarted","Data":"12aecfa02e201943ea03f99c62d522e5eb2aa3678f515eb17edc2b7808928f80"} Nov 26 07:11:20 crc kubenswrapper[4909]: I1126 07:11:20.030773 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" event={"ID":"86ed9c94-e66e-4d26-abdc-4fa12fb3772d","Type":"ContainerStarted","Data":"4a9390efd804e648ab8f3d490be7d034c25a42969add0816ebbe8696cc69980c"} Nov 26 07:11:20 crc kubenswrapper[4909]: I1126 07:11:20.030788 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" event={"ID":"86ed9c94-e66e-4d26-abdc-4fa12fb3772d","Type":"ContainerStarted","Data":"ed5e11f30f4f58fc8fe18952ba3f98cc4077bb1b1fac3acdcf3fa928f2a2d51f"} Nov 26 07:11:20 crc kubenswrapper[4909]: I1126 07:11:20.030799 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" event={"ID":"86ed9c94-e66e-4d26-abdc-4fa12fb3772d","Type":"ContainerStarted","Data":"5eb6a1de9c2c7a31b68652bcd9e4e7049f23d0545893e420a30a178b610d3cf1"} Nov 26 07:11:20 crc kubenswrapper[4909]: I1126 07:11:20.030810 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" event={"ID":"86ed9c94-e66e-4d26-abdc-4fa12fb3772d","Type":"ContainerStarted","Data":"70420a0d19a78d25fb6b105db6315f67808b652c412302b066a7660d7d14e61f"} Nov 26 07:11:20 crc kubenswrapper[4909]: I1126 07:11:20.030821 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" event={"ID":"86ed9c94-e66e-4d26-abdc-4fa12fb3772d","Type":"ContainerStarted","Data":"838a642d219bbe4a4fb570426f859b419803c72f0bae9496cbc2f5672f20d54c"} Nov 26 07:11:22 crc kubenswrapper[4909]: I1126 07:11:22.049633 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" event={"ID":"86ed9c94-e66e-4d26-abdc-4fa12fb3772d","Type":"ContainerStarted","Data":"07190e9f6e37229d2a2fbb37df33c583d38dbcdce0c0ffc2b7c20e04f779724f"} Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.612507 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-744cq"] Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.613869 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.616761 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.616958 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.617278 4909 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-lv8rh" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.617762 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.714776 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlzfs\" (UniqueName: \"kubernetes.io/projected/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-kube-api-access-wlzfs\") pod \"crc-storage-crc-744cq\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.714829 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-crc-storage\") pod \"crc-storage-crc-744cq\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.714871 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-node-mnt\") pod \"crc-storage-crc-744cq\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.815504 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlzfs\" (UniqueName: \"kubernetes.io/projected/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-kube-api-access-wlzfs\") pod \"crc-storage-crc-744cq\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.815544 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-crc-storage\") pod \"crc-storage-crc-744cq\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.815575 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-node-mnt\") pod \"crc-storage-crc-744cq\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.815812 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-node-mnt\") pod \"crc-storage-crc-744cq\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.816336 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-crc-storage\") pod \"crc-storage-crc-744cq\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.842806 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlzfs\" (UniqueName: \"kubernetes.io/projected/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-kube-api-access-wlzfs\") pod \"crc-storage-crc-744cq\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: I1126 07:11:23.927266 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: E1126 07:11:23.957864 4909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(5b3262b57373ecc6049609772d0efa2273f2ac3714ec485393bee24285f4069f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 26 07:11:23 crc kubenswrapper[4909]: E1126 07:11:23.957968 4909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(5b3262b57373ecc6049609772d0efa2273f2ac3714ec485393bee24285f4069f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: E1126 07:11:23.957993 4909 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(5b3262b57373ecc6049609772d0efa2273f2ac3714ec485393bee24285f4069f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:23 crc kubenswrapper[4909]: E1126 07:11:23.958033 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-744cq_crc-storage(c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-744cq_crc-storage(c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(5b3262b57373ecc6049609772d0efa2273f2ac3714ec485393bee24285f4069f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-744cq" podUID="c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd" Nov 26 07:11:25 crc kubenswrapper[4909]: I1126 07:11:25.041300 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-744cq"] Nov 26 07:11:25 crc kubenswrapper[4909]: I1126 07:11:25.041760 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:25 crc kubenswrapper[4909]: I1126 07:11:25.042322 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:25 crc kubenswrapper[4909]: E1126 07:11:25.064922 4909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(955316f67454b000660640af7cd8db46b8e1fe39d9104035fab522d5e1bbf281): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 26 07:11:25 crc kubenswrapper[4909]: E1126 07:11:25.064987 4909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(955316f67454b000660640af7cd8db46b8e1fe39d9104035fab522d5e1bbf281): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:25 crc kubenswrapper[4909]: E1126 07:11:25.065007 4909 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(955316f67454b000660640af7cd8db46b8e1fe39d9104035fab522d5e1bbf281): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:25 crc kubenswrapper[4909]: E1126 07:11:25.065066 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-744cq_crc-storage(c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-744cq_crc-storage(c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(955316f67454b000660640af7cd8db46b8e1fe39d9104035fab522d5e1bbf281): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-744cq" podUID="c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd" Nov 26 07:11:25 crc kubenswrapper[4909]: I1126 07:11:25.071816 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" event={"ID":"86ed9c94-e66e-4d26-abdc-4fa12fb3772d","Type":"ContainerStarted","Data":"be0e10312127ddbc942a55349895a5458b67f66be1788805264661736dbb1536"} Nov 26 07:11:25 crc kubenswrapper[4909]: I1126 07:11:25.072201 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:25 crc kubenswrapper[4909]: I1126 07:11:25.072242 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:25 crc kubenswrapper[4909]: I1126 07:11:25.096572 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" podStartSLOduration=8.096547351 podStartE2EDuration="8.096547351s" podCreationTimestamp="2025-11-26 07:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:11:25.095960945 +0000 UTC m=+657.242172121" watchObservedRunningTime="2025-11-26 07:11:25.096547351 +0000 UTC m=+657.242758517" Nov 26 07:11:25 crc kubenswrapper[4909]: I1126 07:11:25.100444 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:25 crc kubenswrapper[4909]: I1126 07:11:25.110510 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:26 crc kubenswrapper[4909]: I1126 07:11:26.076910 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:29 crc kubenswrapper[4909]: I1126 07:11:29.498540 4909 scope.go:117] "RemoveContainer" containerID="0adb77440dd3fcd99f6d9a0e77ab2d7cb635055d8ae27a82d06d45441a542384" Nov 26 07:11:29 crc kubenswrapper[4909]: E1126 07:11:29.499027 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-6b4ts_openshift-multus(3d586ea3-b189-476f-9e44-4579388f3107)\"" pod="openshift-multus/multus-6b4ts" podUID="3d586ea3-b189-476f-9e44-4579388f3107" Nov 26 07:11:37 crc kubenswrapper[4909]: I1126 07:11:37.498704 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:37 crc kubenswrapper[4909]: I1126 07:11:37.499446 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:37 crc kubenswrapper[4909]: E1126 07:11:37.550304 4909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(58601407eaef66eccb65747b8ff3509e8141679010d7b4ac48f08e6748b05bea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 26 07:11:37 crc kubenswrapper[4909]: E1126 07:11:37.551861 4909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(58601407eaef66eccb65747b8ff3509e8141679010d7b4ac48f08e6748b05bea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:37 crc kubenswrapper[4909]: E1126 07:11:37.552206 4909 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(58601407eaef66eccb65747b8ff3509e8141679010d7b4ac48f08e6748b05bea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:37 crc kubenswrapper[4909]: E1126 07:11:37.552572 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-744cq_crc-storage(c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-744cq_crc-storage(c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-744cq_crc-storage_c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd_0(58601407eaef66eccb65747b8ff3509e8141679010d7b4ac48f08e6748b05bea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-744cq" podUID="c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd" Nov 26 07:11:44 crc kubenswrapper[4909]: I1126 07:11:44.498926 4909 scope.go:117] "RemoveContainer" containerID="0adb77440dd3fcd99f6d9a0e77ab2d7cb635055d8ae27a82d06d45441a542384" Nov 26 07:11:45 crc kubenswrapper[4909]: I1126 07:11:45.197479 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6b4ts_3d586ea3-b189-476f-9e44-4579388f3107/kube-multus/2.log" Nov 26 07:11:45 crc kubenswrapper[4909]: I1126 07:11:45.197890 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6b4ts" event={"ID":"3d586ea3-b189-476f-9e44-4579388f3107","Type":"ContainerStarted","Data":"668196e5a337cf8d2c83519b972d9f3849093e127a4651cea55b608632d049bc"} Nov 26 07:11:48 crc kubenswrapper[4909]: I1126 07:11:48.058430 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jz8n4" Nov 26 07:11:49 crc kubenswrapper[4909]: I1126 07:11:49.497926 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:49 crc kubenswrapper[4909]: I1126 07:11:49.500173 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:49 crc kubenswrapper[4909]: I1126 07:11:49.973096 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-744cq"] Nov 26 07:11:49 crc kubenswrapper[4909]: I1126 07:11:49.988840 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 07:11:50 crc kubenswrapper[4909]: I1126 07:11:50.727097 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-744cq" event={"ID":"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd","Type":"ContainerStarted","Data":"f8abdc97e1ba0227c24a647a092195b033438ffa9e7307b0a0f9f8adb2089263"} Nov 26 07:11:52 crc kubenswrapper[4909]: I1126 07:11:52.741420 4909 generic.go:334] "Generic (PLEG): container finished" podID="c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd" containerID="d94813654796775feda9c1790440b220cdc17f20fd05eb78ca9acc78d3d9d895" exitCode=0 Nov 26 07:11:52 crc kubenswrapper[4909]: I1126 07:11:52.741802 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-744cq" event={"ID":"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd","Type":"ContainerDied","Data":"d94813654796775feda9c1790440b220cdc17f20fd05eb78ca9acc78d3d9d895"} Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.089543 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.219023 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-crc-storage\") pod \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.219090 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-node-mnt\") pod \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.219144 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlzfs\" (UniqueName: \"kubernetes.io/projected/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-kube-api-access-wlzfs\") pod \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\" (UID: \"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd\") " Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.219235 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd" (UID: "c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.219482 4909 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.227036 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-kube-api-access-wlzfs" (OuterVolumeSpecName: "kube-api-access-wlzfs") pod "c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd" (UID: "c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd"). 
InnerVolumeSpecName "kube-api-access-wlzfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.242492 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd" (UID: "c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.320434 4909 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.320474 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlzfs\" (UniqueName: \"kubernetes.io/projected/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd-kube-api-access-wlzfs\") on node \"crc\" DevicePath \"\"" Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.754265 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-744cq" event={"ID":"c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd","Type":"ContainerDied","Data":"f8abdc97e1ba0227c24a647a092195b033438ffa9e7307b0a0f9f8adb2089263"} Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.754307 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8abdc97e1ba0227c24a647a092195b033438ffa9e7307b0a0f9f8adb2089263" Nov 26 07:11:54 crc kubenswrapper[4909]: I1126 07:11:54.754365 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-744cq" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.691844 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd"] Nov 26 07:12:00 crc kubenswrapper[4909]: E1126 07:12:00.692736 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd" containerName="storage" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.692755 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd" containerName="storage" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.693104 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd" containerName="storage" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.694173 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.696227 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.703571 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd"] Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.704311 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.704362 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxs6\" (UniqueName: \"kubernetes.io/projected/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-kube-api-access-6vxs6\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.704405 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.805097 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.805186 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.805216 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vxs6\" (UniqueName: \"kubernetes.io/projected/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-kube-api-access-6vxs6\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.805568 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.805695 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:00 crc kubenswrapper[4909]: I1126 07:12:00.824253 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vxs6\" (UniqueName: \"kubernetes.io/projected/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-kube-api-access-6vxs6\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:01 crc kubenswrapper[4909]: I1126 07:12:01.022439 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:01 crc kubenswrapper[4909]: I1126 07:12:01.231689 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd"] Nov 26 07:12:01 crc kubenswrapper[4909]: I1126 07:12:01.793215 4909 generic.go:334] "Generic (PLEG): container finished" podID="5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" containerID="5058d33db20df05521b8eb775702fa534b6190f647102c8232f88cbe1022405b" exitCode=0 Nov 26 07:12:01 crc kubenswrapper[4909]: I1126 07:12:01.793292 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" event={"ID":"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9","Type":"ContainerDied","Data":"5058d33db20df05521b8eb775702fa534b6190f647102c8232f88cbe1022405b"} Nov 26 07:12:01 crc kubenswrapper[4909]: I1126 07:12:01.793341 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" event={"ID":"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9","Type":"ContainerStarted","Data":"d1e808b0baeb806cf62e871a11c150973acf06ea15252c88feea2097705bb54f"} Nov 26 07:12:03 crc kubenswrapper[4909]: I1126 07:12:03.807017 4909 generic.go:334] "Generic (PLEG): container finished" podID="5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" containerID="2fd40708ea69098f039306c5a92f43a009968addb8ad045bed37c3517b063508" exitCode=0 Nov 26 07:12:03 crc kubenswrapper[4909]: I1126 07:12:03.807390 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" event={"ID":"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9","Type":"ContainerDied","Data":"2fd40708ea69098f039306c5a92f43a009968addb8ad045bed37c3517b063508"} Nov 26 07:12:04 crc kubenswrapper[4909]: I1126 07:12:04.814750 4909 generic.go:334] "Generic (PLEG): container finished" podID="5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" containerID="2be79998d062c5be07d542f3e7409e67417055e8880346fb06f426d3e470cfa8" exitCode=0 Nov 26 07:12:04 crc kubenswrapper[4909]: I1126 
07:12:04.814871 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" event={"ID":"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9","Type":"ContainerDied","Data":"2be79998d062c5be07d542f3e7409e67417055e8880346fb06f426d3e470cfa8"} Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.074389 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.207729 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vxs6\" (UniqueName: \"kubernetes.io/projected/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-kube-api-access-6vxs6\") pod \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.207918 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-bundle\") pod \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.208090 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-util\") pod \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\" (UID: \"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9\") " Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.210240 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-bundle" (OuterVolumeSpecName: "bundle") pod "5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" (UID: "5519a2e2-0cf0-441e-b9ed-32b3daf16fc9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.214786 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-kube-api-access-6vxs6" (OuterVolumeSpecName: "kube-api-access-6vxs6") pod "5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" (UID: "5519a2e2-0cf0-441e-b9ed-32b3daf16fc9"). InnerVolumeSpecName "kube-api-access-6vxs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.240485 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-util" (OuterVolumeSpecName: "util") pod "5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" (UID: "5519a2e2-0cf0-441e-b9ed-32b3daf16fc9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.309846 4909 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-util\") on node \"crc\" DevicePath \"\"" Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.309889 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vxs6\" (UniqueName: \"kubernetes.io/projected/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-kube-api-access-6vxs6\") on node \"crc\" DevicePath \"\"" Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.309905 4909 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5519a2e2-0cf0-441e-b9ed-32b3daf16fc9-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.833441 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" event={"ID":"5519a2e2-0cf0-441e-b9ed-32b3daf16fc9","Type":"ContainerDied","Data":"d1e808b0baeb806cf62e871a11c150973acf06ea15252c88feea2097705bb54f"} Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.833878 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1e808b0baeb806cf62e871a11c150973acf06ea15252c88feea2097705bb54f" Nov 26 07:12:06 crc kubenswrapper[4909]: I1126 07:12:06.833551 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd" Nov 26 07:12:07 crc kubenswrapper[4909]: I1126 07:12:07.301459 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:12:07 crc kubenswrapper[4909]: I1126 07:12:07.301576 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.536242 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-dsr5w"] Nov 26 07:12:08 crc kubenswrapper[4909]: E1126 07:12:08.536423 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" containerName="pull" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.536436 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" containerName="pull" Nov 26 07:12:08 crc kubenswrapper[4909]: E1126 07:12:08.536458 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" containerName="util" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.536466 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" containerName="util" Nov 26 07:12:08 crc kubenswrapper[4909]: E1126 07:12:08.536476 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" containerName="extract" Nov 26 07:12:08 crc 
kubenswrapper[4909]: I1126 07:12:08.536483 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" containerName="extract" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.536567 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="5519a2e2-0cf0-441e-b9ed-32b3daf16fc9" containerName="extract" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.536906 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-dsr5w" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.538259 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.540226 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.540234 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-p47qh" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.547111 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-dsr5w"] Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.654030 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4dgt\" (UniqueName: \"kubernetes.io/projected/14291eb4-4810-4cb7-ba01-f62943f69090-kube-api-access-m4dgt\") pod \"nmstate-operator-557fdffb88-dsr5w\" (UID: \"14291eb4-4810-4cb7-ba01-f62943f69090\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-dsr5w" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.755368 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4dgt\" (UniqueName: \"kubernetes.io/projected/14291eb4-4810-4cb7-ba01-f62943f69090-kube-api-access-m4dgt\") pod \"nmstate-operator-557fdffb88-dsr5w\" (UID: \"14291eb4-4810-4cb7-ba01-f62943f69090\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-dsr5w" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.775668 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4dgt\" (UniqueName: \"kubernetes.io/projected/14291eb4-4810-4cb7-ba01-f62943f69090-kube-api-access-m4dgt\") pod \"nmstate-operator-557fdffb88-dsr5w\" (UID: \"14291eb4-4810-4cb7-ba01-f62943f69090\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-dsr5w" Nov 26 07:12:08 crc kubenswrapper[4909]: I1126 07:12:08.854928 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-dsr5w" Nov 26 07:12:09 crc kubenswrapper[4909]: I1126 07:12:09.063035 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-dsr5w"] Nov 26 07:12:09 crc kubenswrapper[4909]: W1126 07:12:09.072936 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14291eb4_4810_4cb7_ba01_f62943f69090.slice/crio-fb926df1095bcc4aab1b9cbedcc71d5d975cac433189cce220f3a225a9e4a245 WatchSource:0}: Error finding container fb926df1095bcc4aab1b9cbedcc71d5d975cac433189cce220f3a225a9e4a245: Status 404 returned error can't find the container with id fb926df1095bcc4aab1b9cbedcc71d5d975cac433189cce220f3a225a9e4a245 Nov 26 07:12:09 crc kubenswrapper[4909]: I1126 07:12:09.854259 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-dsr5w" event={"ID":"14291eb4-4810-4cb7-ba01-f62943f69090","Type":"ContainerStarted","Data":"fb926df1095bcc4aab1b9cbedcc71d5d975cac433189cce220f3a225a9e4a245"} Nov 26 07:12:10 crc kubenswrapper[4909]: I1126 07:12:10.859927 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-dsr5w" event={"ID":"14291eb4-4810-4cb7-ba01-f62943f69090","Type":"ContainerStarted","Data":"c527ccbacd3d8afdeb25ff7981be26c3c4e7787ad8d075b79dba8e25455658e7"} Nov 26 07:12:10 crc kubenswrapper[4909]: I1126 07:12:10.879471 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-dsr5w" podStartSLOduration=1.255953922 podStartE2EDuration="2.879446593s" podCreationTimestamp="2025-11-26 07:12:08 +0000 UTC" firstStartedPulling="2025-11-26 07:12:09.07478518 +0000 UTC m=+701.220996356" lastFinishedPulling="2025-11-26 07:12:10.698277861 +0000 UTC m=+702.844489027" observedRunningTime="2025-11-26 07:12:10.875845557 +0000 UTC m=+703.022056723" watchObservedRunningTime="2025-11-26 07:12:10.879446593 +0000 UTC m=+703.025657769" Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.725910 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws"] Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.726721 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws" Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.730615 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fb5c2" Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.735663 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x"] Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.736436 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.738081 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.743607 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws"]
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.775369 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tbjjq"]
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.776071 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.785798 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x"]
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.850347 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"]
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.851145 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.858147 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"]
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.860678 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-mn5hl"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.860745 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.860746 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.917961 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2e1de2fd-7015-4de2-9689-d99deacc07b1-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-nd78x\" (UID: \"2e1de2fd-7015-4de2-9689-d99deacc07b1\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.918006 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8lnr\" (UniqueName: \"kubernetes.io/projected/2e1de2fd-7015-4de2-9689-d99deacc07b1-kube-api-access-t8lnr\") pod \"nmstate-webhook-6b89b748d8-nd78x\" (UID: \"2e1de2fd-7015-4de2-9689-d99deacc07b1\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.918043 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/84a91ab3-ee60-44e7-ba77-837689cfd490-ovs-socket\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.918067 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz7cw\" (UniqueName: \"kubernetes.io/projected/84a91ab3-ee60-44e7-ba77-837689cfd490-kube-api-access-bz7cw\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.918090 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/84a91ab3-ee60-44e7-ba77-837689cfd490-nmstate-lock\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.918106 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7mbd\" (UniqueName: \"kubernetes.io/projected/c5d1ea9d-2001-418f-9b98-41cf8256a723-kube-api-access-f7mbd\") pod \"nmstate-metrics-5dcf9c57c5-q29ws\" (UID: \"c5d1ea9d-2001-418f-9b98-41cf8256a723\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws"
Nov 26 07:12:11 crc kubenswrapper[4909]: I1126 07:12:11.918133 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/84a91ab3-ee60-44e7-ba77-837689cfd490-dbus-socket\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.016634 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-85fb75c9-qf5d8"]
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.017518 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.018813 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz7cw\" (UniqueName: \"kubernetes.io/projected/84a91ab3-ee60-44e7-ba77-837689cfd490-kube-api-access-bz7cw\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.018878 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/84a91ab3-ee60-44e7-ba77-837689cfd490-nmstate-lock\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.018910 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7mbd\" (UniqueName: \"kubernetes.io/projected/c5d1ea9d-2001-418f-9b98-41cf8256a723-kube-api-access-f7mbd\") pod \"nmstate-metrics-5dcf9c57c5-q29ws\" (UID: \"c5d1ea9d-2001-418f-9b98-41cf8256a723\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.018944 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f022f0f-6f02-4652-8f76-44d162f8db2d-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-ntngv\" (UID: \"7f022f0f-6f02-4652-8f76-44d162f8db2d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.018964 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/84a91ab3-ee60-44e7-ba77-837689cfd490-dbus-socket\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.018991 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr87h\" (UniqueName: \"kubernetes.io/projected/7f022f0f-6f02-4652-8f76-44d162f8db2d-kube-api-access-jr87h\") pod \"nmstate-console-plugin-5874bd7bc5-ntngv\" (UID: \"7f022f0f-6f02-4652-8f76-44d162f8db2d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.019024 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2e1de2fd-7015-4de2-9689-d99deacc07b1-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-nd78x\" (UID: \"2e1de2fd-7015-4de2-9689-d99deacc07b1\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.019045 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7f022f0f-6f02-4652-8f76-44d162f8db2d-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-ntngv\" (UID: \"7f022f0f-6f02-4652-8f76-44d162f8db2d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.019077 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8lnr\" (UniqueName: \"kubernetes.io/projected/2e1de2fd-7015-4de2-9689-d99deacc07b1-kube-api-access-t8lnr\") pod \"nmstate-webhook-6b89b748d8-nd78x\" (UID: \"2e1de2fd-7015-4de2-9689-d99deacc07b1\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.019122 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/84a91ab3-ee60-44e7-ba77-837689cfd490-ovs-socket\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.019189 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/84a91ab3-ee60-44e7-ba77-837689cfd490-ovs-socket\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.019331 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/84a91ab3-ee60-44e7-ba77-837689cfd490-dbus-socket\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.019394 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/84a91ab3-ee60-44e7-ba77-837689cfd490-nmstate-lock\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.033347 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85fb75c9-qf5d8"]
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.039776 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2e1de2fd-7015-4de2-9689-d99deacc07b1-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-nd78x\" (UID: \"2e1de2fd-7015-4de2-9689-d99deacc07b1\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.047992 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8lnr\" (UniqueName: \"kubernetes.io/projected/2e1de2fd-7015-4de2-9689-d99deacc07b1-kube-api-access-t8lnr\") pod \"nmstate-webhook-6b89b748d8-nd78x\" (UID: \"2e1de2fd-7015-4de2-9689-d99deacc07b1\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.048234 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz7cw\" (UniqueName: \"kubernetes.io/projected/84a91ab3-ee60-44e7-ba77-837689cfd490-kube-api-access-bz7cw\") pod \"nmstate-handler-tbjjq\" (UID: \"84a91ab3-ee60-44e7-ba77-837689cfd490\") " pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.052947 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7mbd\" (UniqueName: \"kubernetes.io/projected/c5d1ea9d-2001-418f-9b98-41cf8256a723-kube-api-access-f7mbd\") pod \"nmstate-metrics-5dcf9c57c5-q29ws\" (UID: \"c5d1ea9d-2001-418f-9b98-41cf8256a723\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.067567 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.093974 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tbjjq"
Nov 26 07:12:12 crc kubenswrapper[4909]: W1126 07:12:12.114848 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84a91ab3_ee60_44e7_ba77_837689cfd490.slice/crio-858ad706ac11f185cfcdb54c76686b6659f0529c38e5241015b4689e5d2525a1 WatchSource:0}: Error finding container 858ad706ac11f185cfcdb54c76686b6659f0529c38e5241015b4689e5d2525a1: Status 404 returned error can't find the container with id 858ad706ac11f185cfcdb54c76686b6659f0529c38e5241015b4689e5d2525a1
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.120075 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7f022f0f-6f02-4652-8f76-44d162f8db2d-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-ntngv\" (UID: \"7f022f0f-6f02-4652-8f76-44d162f8db2d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.120122 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a94340bd-5f8f-42e3-84af-b189a7d479c8-console-serving-cert\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.120145 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-trusted-ca-bundle\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.120162 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-console-config\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.120212 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-service-ca\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.120229 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6fk\" (UniqueName: \"kubernetes.io/projected/a94340bd-5f8f-42e3-84af-b189a7d479c8-kube-api-access-th6fk\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.120245 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a94340bd-5f8f-42e3-84af-b189a7d479c8-console-oauth-config\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.120267 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f022f0f-6f02-4652-8f76-44d162f8db2d-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-ntngv\" (UID: \"7f022f0f-6f02-4652-8f76-44d162f8db2d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.120289 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr87h\" (UniqueName: \"kubernetes.io/projected/7f022f0f-6f02-4652-8f76-44d162f8db2d-kube-api-access-jr87h\") pod \"nmstate-console-plugin-5874bd7bc5-ntngv\" (UID: \"7f022f0f-6f02-4652-8f76-44d162f8db2d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.120303 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-oauth-serving-cert\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.121547 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7f022f0f-6f02-4652-8f76-44d162f8db2d-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-ntngv\" (UID: \"7f022f0f-6f02-4652-8f76-44d162f8db2d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"
Nov 26 07:12:12 crc kubenswrapper[4909]: E1126 07:12:12.121678 4909 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Nov 26 07:12:12 crc kubenswrapper[4909]: E1126 07:12:12.121720 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f022f0f-6f02-4652-8f76-44d162f8db2d-plugin-serving-cert podName:7f022f0f-6f02-4652-8f76-44d162f8db2d nodeName:}" failed. No retries permitted until 2025-11-26 07:12:12.621704903 +0000 UTC m=+704.767916069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/7f022f0f-6f02-4652-8f76-44d162f8db2d-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-ntngv" (UID: "7f022f0f-6f02-4652-8f76-44d162f8db2d") : secret "plugin-serving-cert" not found
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.138240 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr87h\" (UniqueName: \"kubernetes.io/projected/7f022f0f-6f02-4652-8f76-44d162f8db2d-kube-api-access-jr87h\") pod \"nmstate-console-plugin-5874bd7bc5-ntngv\" (UID: \"7f022f0f-6f02-4652-8f76-44d162f8db2d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.222179 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th6fk\" (UniqueName: \"kubernetes.io/projected/a94340bd-5f8f-42e3-84af-b189a7d479c8-kube-api-access-th6fk\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.222420 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-service-ca\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.222438 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a94340bd-5f8f-42e3-84af-b189a7d479c8-console-oauth-config\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.222483 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-oauth-serving-cert\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.222523 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a94340bd-5f8f-42e3-84af-b189a7d479c8-console-serving-cert\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.222539 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-trusted-ca-bundle\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.222557 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-console-config\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.223283 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-console-config\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8"
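The two E-level entries above show the mount failing because the secret has not been published yet, and nestedpendingoperations deferring the retry (durationBeforeRetry 500ms); the delay grows on repeated failures until the retried mount succeeds at 07:12:12.632997 below. A small Go sketch of that retry shape, assuming a stand-in mountSecret function and a simple doubling backoff (the real kubelet caps the delay; values here are illustrative):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNotFound = errors.New(`secret "plugin-serving-cert" not found`)

    // mountSecret stands in for MountVolume.SetUp; it fails until the
    // secret exists (here: until the third attempt).
    func mountSecret(attempt int) error {
        if attempt < 3 {
            return errNotFound
        }
        return nil
    }

    func main() {
        delay := 500 * time.Millisecond // durationBeforeRetry in the log
        for attempt := 1; ; attempt++ {
            if err := mountSecret(attempt); err == nil {
                fmt.Println("MountVolume.SetUp succeeded")
                return
            }
            fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
            time.Sleep(delay)
            delay *= 2 // exponential backoff between retries
        }
    }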
"MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-console-config\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.223902 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-service-ca\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.223977 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-oauth-serving-cert\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.225074 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a94340bd-5f8f-42e3-84af-b189a7d479c8-trusted-ca-bundle\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.230633 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a94340bd-5f8f-42e3-84af-b189a7d479c8-console-oauth-config\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.231208 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a94340bd-5f8f-42e3-84af-b189a7d479c8-console-serving-cert\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.242384 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th6fk\" (UniqueName: \"kubernetes.io/projected/a94340bd-5f8f-42e3-84af-b189a7d479c8-kube-api-access-th6fk\") pod \"console-85fb75c9-qf5d8\" (UID: \"a94340bd-5f8f-42e3-84af-b189a7d479c8\") " pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.269184 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x"] Nov 26 07:12:12 crc kubenswrapper[4909]: W1126 07:12:12.273396 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e1de2fd_7015_4de2_9689_d99deacc07b1.slice/crio-73fdec058dac624b85dcd44bca92015d25ac710cfe56c4df6dc49977c47b45f9 WatchSource:0}: Error finding container 73fdec058dac624b85dcd44bca92015d25ac710cfe56c4df6dc49977c47b45f9: Status 404 returned error can't find the container with id 73fdec058dac624b85dcd44bca92015d25ac710cfe56c4df6dc49977c47b45f9 Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.350752 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.434117 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.581177 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws"] Nov 26 07:12:12 crc kubenswrapper[4909]: W1126 07:12:12.588845 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5d1ea9d_2001_418f_9b98_41cf8256a723.slice/crio-bf319c085700e4b662b5095135934e2639f7440ed50dc35bd70177118a03e1fc WatchSource:0}: Error finding container bf319c085700e4b662b5095135934e2639f7440ed50dc35bd70177118a03e1fc: Status 404 returned error can't find the container with id bf319c085700e4b662b5095135934e2639f7440ed50dc35bd70177118a03e1fc Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.628982 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f022f0f-6f02-4652-8f76-44d162f8db2d-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-ntngv\" (UID: \"7f022f0f-6f02-4652-8f76-44d162f8db2d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.632997 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f022f0f-6f02-4652-8f76-44d162f8db2d-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-ntngv\" (UID: \"7f022f0f-6f02-4652-8f76-44d162f8db2d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.652723 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85fb75c9-qf5d8"] Nov 26 07:12:12 crc kubenswrapper[4909]: W1126 07:12:12.659520 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda94340bd_5f8f_42e3_84af_b189a7d479c8.slice/crio-15b6c955c5655d22106f8d23a69b320063bf1fc34672ced875b2ed3b675527e7 WatchSource:0}: Error finding container 15b6c955c5655d22106f8d23a69b320063bf1fc34672ced875b2ed3b675527e7: Status 404 returned error can't find the container with id 15b6c955c5655d22106f8d23a69b320063bf1fc34672ced875b2ed3b675527e7 Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.765392 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.882076 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85fb75c9-qf5d8" event={"ID":"a94340bd-5f8f-42e3-84af-b189a7d479c8","Type":"ContainerStarted","Data":"61d95e502557511f228ca8771be0e0096e203f8d80856b29fd5dcc9e3d73fcfd"} Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.882125 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85fb75c9-qf5d8" event={"ID":"a94340bd-5f8f-42e3-84af-b189a7d479c8","Type":"ContainerStarted","Data":"15b6c955c5655d22106f8d23a69b320063bf1fc34672ced875b2ed3b675527e7"} Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.888256 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws" event={"ID":"c5d1ea9d-2001-418f-9b98-41cf8256a723","Type":"ContainerStarted","Data":"bf319c085700e4b662b5095135934e2639f7440ed50dc35bd70177118a03e1fc"} Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.889180 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x" event={"ID":"2e1de2fd-7015-4de2-9689-d99deacc07b1","Type":"ContainerStarted","Data":"73fdec058dac624b85dcd44bca92015d25ac710cfe56c4df6dc49977c47b45f9"} Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.889870 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tbjjq" event={"ID":"84a91ab3-ee60-44e7-ba77-837689cfd490","Type":"ContainerStarted","Data":"858ad706ac11f185cfcdb54c76686b6659f0529c38e5241015b4689e5d2525a1"} Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.900941 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-85fb75c9-qf5d8" podStartSLOduration=1.900924233 podStartE2EDuration="1.900924233s" podCreationTimestamp="2025-11-26 07:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:12:12.900063049 +0000 UTC m=+705.046274225" watchObservedRunningTime="2025-11-26 07:12:12.900924233 +0000 UTC m=+705.047135399" Nov 26 07:12:12 crc kubenswrapper[4909]: I1126 07:12:12.971746 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv"] Nov 26 07:12:12 crc kubenswrapper[4909]: W1126 07:12:12.979773 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f022f0f_6f02_4652_8f76_44d162f8db2d.slice/crio-afa1dfab2578f1ce3e5099ddff0e6f6eb9e88ce6c17055c4ef09c24243776687 WatchSource:0}: Error finding container afa1dfab2578f1ce3e5099ddff0e6f6eb9e88ce6c17055c4ef09c24243776687: Status 404 returned error can't find the container with id afa1dfab2578f1ce3e5099ddff0e6f6eb9e88ce6c17055c4ef09c24243776687 Nov 26 07:12:13 crc kubenswrapper[4909]: I1126 07:12:13.900558 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv" event={"ID":"7f022f0f-6f02-4652-8f76-44d162f8db2d","Type":"ContainerStarted","Data":"afa1dfab2578f1ce3e5099ddff0e6f6eb9e88ce6c17055c4ef09c24243776687"} Nov 26 07:12:14 crc kubenswrapper[4909]: I1126 07:12:14.906556 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws" 
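The pod_startup_latency_tracker entry above derives the observed startup duration from podCreationTimestamp and watchObservedRunningTime; with zero-valued firstStartedPulling/lastFinishedPulling there is no image-pull time to subtract, so podStartSLOduration equals podStartE2EDuration. A small Go sketch of that arithmetic, reusing the console pod's logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the timestamps kubelet prints in the entry above.
        const layout = "2006-01-02 15:04:05 -0700 MST"
        created, err := time.Parse(layout, "2025-11-26 07:12:11 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-11-26 07:12:12.900924233 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // With zero-valued pull timestamps, nothing is subtracted.
        fmt.Println(observed.Sub(created)) // 1.900924233s, the logged podStartE2EDuration
    }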
event={"ID":"c5d1ea9d-2001-418f-9b98-41cf8256a723","Type":"ContainerStarted","Data":"c6fb5a5166e7f1954d2b3fbf10058f3d58ccfaac64003334d5b41f814e52f977"} Nov 26 07:12:14 crc kubenswrapper[4909]: I1126 07:12:14.907663 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x" event={"ID":"2e1de2fd-7015-4de2-9689-d99deacc07b1","Type":"ContainerStarted","Data":"4cee7aeddab0d090001f71b13f78c991356d37c7264f1df941f3633aa447ba78"} Nov 26 07:12:14 crc kubenswrapper[4909]: I1126 07:12:14.908285 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x" Nov 26 07:12:14 crc kubenswrapper[4909]: I1126 07:12:14.909173 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tbjjq" event={"ID":"84a91ab3-ee60-44e7-ba77-837689cfd490","Type":"ContainerStarted","Data":"41afd49d501f9c92c95077e18b812cf0e44b1eeb2c53e51fcc0b34ed22d15466"} Nov 26 07:12:14 crc kubenswrapper[4909]: I1126 07:12:14.909750 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tbjjq" Nov 26 07:12:14 crc kubenswrapper[4909]: I1126 07:12:14.922529 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x" podStartSLOduration=1.690677712 podStartE2EDuration="3.922514565s" podCreationTimestamp="2025-11-26 07:12:11 +0000 UTC" firstStartedPulling="2025-11-26 07:12:12.276182441 +0000 UTC m=+704.422393617" lastFinishedPulling="2025-11-26 07:12:14.508019304 +0000 UTC m=+706.654230470" observedRunningTime="2025-11-26 07:12:14.921074187 +0000 UTC m=+707.067285363" watchObservedRunningTime="2025-11-26 07:12:14.922514565 +0000 UTC m=+707.068725731" Nov 26 07:12:14 crc kubenswrapper[4909]: I1126 07:12:14.942980 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tbjjq" podStartSLOduration=1.592557104 podStartE2EDuration="3.942964954s" podCreationTimestamp="2025-11-26 07:12:11 +0000 UTC" firstStartedPulling="2025-11-26 07:12:12.116835223 +0000 UTC m=+704.263046389" lastFinishedPulling="2025-11-26 07:12:14.467243073 +0000 UTC m=+706.613454239" observedRunningTime="2025-11-26 07:12:14.942167262 +0000 UTC m=+707.088378428" watchObservedRunningTime="2025-11-26 07:12:14.942964954 +0000 UTC m=+707.089176120" Nov 26 07:12:15 crc kubenswrapper[4909]: I1126 07:12:15.920397 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv" event={"ID":"7f022f0f-6f02-4652-8f76-44d162f8db2d","Type":"ContainerStarted","Data":"259c16f44eec9f881aeb609f2651927f97cf961ccb3159d8a479d12922e94e3e"} Nov 26 07:12:15 crc kubenswrapper[4909]: I1126 07:12:15.937678 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ntngv" podStartSLOduration=2.51386135 podStartE2EDuration="4.937656944s" podCreationTimestamp="2025-11-26 07:12:11 +0000 UTC" firstStartedPulling="2025-11-26 07:12:12.982032396 +0000 UTC m=+705.128243562" lastFinishedPulling="2025-11-26 07:12:15.40582795 +0000 UTC m=+707.552039156" observedRunningTime="2025-11-26 07:12:15.934136639 +0000 UTC m=+708.080347815" watchObservedRunningTime="2025-11-26 07:12:15.937656944 +0000 UTC m=+708.083868100" Nov 26 07:12:16 crc kubenswrapper[4909]: I1126 07:12:16.927169 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws" event={"ID":"c5d1ea9d-2001-418f-9b98-41cf8256a723","Type":"ContainerStarted","Data":"93c098133c949e36a333a13a3c41d4b062156a1191b5787ef4f0483077dc244c"} Nov 26 07:12:16 crc kubenswrapper[4909]: I1126 07:12:16.953677 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-q29ws" podStartSLOduration=2.002740779 podStartE2EDuration="5.953649564s" podCreationTimestamp="2025-11-26 07:12:11 +0000 UTC" firstStartedPulling="2025-11-26 07:12:12.59167005 +0000 UTC m=+704.737881216" lastFinishedPulling="2025-11-26 07:12:16.542578835 +0000 UTC m=+708.688790001" observedRunningTime="2025-11-26 07:12:16.950680014 +0000 UTC m=+709.096891210" watchObservedRunningTime="2025-11-26 07:12:16.953649564 +0000 UTC m=+709.099860780" Nov 26 07:12:22 crc kubenswrapper[4909]: I1126 07:12:22.119022 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tbjjq" Nov 26 07:12:22 crc kubenswrapper[4909]: I1126 07:12:22.435047 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:22 crc kubenswrapper[4909]: I1126 07:12:22.435118 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:22 crc kubenswrapper[4909]: I1126 07:12:22.439335 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:22 crc kubenswrapper[4909]: I1126 07:12:22.971774 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-85fb75c9-qf5d8" Nov 26 07:12:23 crc kubenswrapper[4909]: I1126 07:12:23.024094 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-f7bmk"] Nov 26 07:12:32 crc kubenswrapper[4909]: I1126 07:12:32.075272 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-nd78x" Nov 26 07:12:37 crc kubenswrapper[4909]: I1126 07:12:37.301267 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:12:37 crc kubenswrapper[4909]: I1126 07:12:37.301961 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.647188 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q"] Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.648708 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.650393 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.655652 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q"] Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.765997 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.766044 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.766080 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrw4p\" (UniqueName: \"kubernetes.io/projected/1e954cbc-c96d-4655-9098-340b6a9452d6-kube-api-access-jrw4p\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.867028 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.867092 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrw4p\" (UniqueName: \"kubernetes.io/projected/1e954cbc-c96d-4655-9098-340b6a9452d6-kube-api-access-jrw4p\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.867168 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.867671 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.867723 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.885347 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrw4p\" (UniqueName: \"kubernetes.io/projected/1e954cbc-c96d-4655-9098-340b6a9452d6-kube-api-access-jrw4p\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" Nov 26 07:12:45 crc kubenswrapper[4909]: I1126 07:12:45.963504 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" Nov 26 07:12:46 crc kubenswrapper[4909]: I1126 07:12:46.367398 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q"] Nov 26 07:12:47 crc kubenswrapper[4909]: I1126 07:12:47.110493 4909 generic.go:334] "Generic (PLEG): container finished" podID="1e954cbc-c96d-4655-9098-340b6a9452d6" containerID="e701ecb7011a3523ff75f62d4e71118d3ea7d8fe11d88e929cc575211bcc9e5d" exitCode=0 Nov 26 07:12:47 crc kubenswrapper[4909]: I1126 07:12:47.110526 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" event={"ID":"1e954cbc-c96d-4655-9098-340b6a9452d6","Type":"ContainerDied","Data":"e701ecb7011a3523ff75f62d4e71118d3ea7d8fe11d88e929cc575211bcc9e5d"} Nov 26 07:12:47 crc kubenswrapper[4909]: I1126 07:12:47.110564 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" event={"ID":"1e954cbc-c96d-4655-9098-340b6a9452d6","Type":"ContainerStarted","Data":"802b4707e4aa75b1f840d539777f529e1fa648b69d8b1c7540bec035761f0ae8"} Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.070884 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-f7bmk" podUID="32133cc3-d6eb-48c5-a3fc-11e820ed8a48" containerName="console" containerID="cri-o://01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c" gracePeriod=15 Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.481749 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-f7bmk_32133cc3-d6eb-48c5-a3fc-11e820ed8a48/console/0.log" Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.482045 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.600563 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-service-ca\") pod \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") "
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.600655 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-config\") pod \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") "
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.600715 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-serving-cert\") pod \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") "
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.600822 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnkf2\" (UniqueName: \"kubernetes.io/projected/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-kube-api-access-cnkf2\") pod \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") "
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.600862 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-oauth-config\") pod \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") "
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.600935 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-trusted-ca-bundle\") pod \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") "
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.600990 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-oauth-serving-cert\") pod \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\" (UID: \"32133cc3-d6eb-48c5-a3fc-11e820ed8a48\") "
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.602584 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-service-ca" (OuterVolumeSpecName: "service-ca") pod "32133cc3-d6eb-48c5-a3fc-11e820ed8a48" (UID: "32133cc3-d6eb-48c5-a3fc-11e820ed8a48"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.603354 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "32133cc3-d6eb-48c5-a3fc-11e820ed8a48" (UID: "32133cc3-d6eb-48c5-a3fc-11e820ed8a48"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.603697 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "32133cc3-d6eb-48c5-a3fc-11e820ed8a48" (UID: "32133cc3-d6eb-48c5-a3fc-11e820ed8a48"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.603734 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-config" (OuterVolumeSpecName: "console-config") pod "32133cc3-d6eb-48c5-a3fc-11e820ed8a48" (UID: "32133cc3-d6eb-48c5-a3fc-11e820ed8a48"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.608001 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-kube-api-access-cnkf2" (OuterVolumeSpecName: "kube-api-access-cnkf2") pod "32133cc3-d6eb-48c5-a3fc-11e820ed8a48" (UID: "32133cc3-d6eb-48c5-a3fc-11e820ed8a48"). InnerVolumeSpecName "kube-api-access-cnkf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.609844 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "32133cc3-d6eb-48c5-a3fc-11e820ed8a48" (UID: "32133cc3-d6eb-48c5-a3fc-11e820ed8a48"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.610013 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "32133cc3-d6eb-48c5-a3fc-11e820ed8a48" (UID: "32133cc3-d6eb-48c5-a3fc-11e820ed8a48"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.703132 4909 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-service-ca\") on node \"crc\" DevicePath \"\""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.703172 4909 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.703182 4909 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.703193 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnkf2\" (UniqueName: \"kubernetes.io/projected/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-kube-api-access-cnkf2\") on node \"crc\" DevicePath \"\""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.703203 4909 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.703211 4909 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 07:12:48 crc kubenswrapper[4909]: I1126 07:12:48.703220 4909 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32133cc3-d6eb-48c5-a3fc-11e820ed8a48-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.127534 4909 generic.go:334] "Generic (PLEG): container finished" podID="1e954cbc-c96d-4655-9098-340b6a9452d6" containerID="7dcefc1a0e855bc8cdc2f4957c9e6329f36f5ff6813e8b2761ea76bb22110f7f" exitCode=0
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.127633 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" event={"ID":"1e954cbc-c96d-4655-9098-340b6a9452d6","Type":"ContainerDied","Data":"7dcefc1a0e855bc8cdc2f4957c9e6329f36f5ff6813e8b2761ea76bb22110f7f"}
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.130861 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-f7bmk_32133cc3-d6eb-48c5-a3fc-11e820ed8a48/console/0.log"
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.130911 4909 generic.go:334] "Generic (PLEG): container finished" podID="32133cc3-d6eb-48c5-a3fc-11e820ed8a48" containerID="01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c" exitCode=2
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.130938 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f7bmk" event={"ID":"32133cc3-d6eb-48c5-a3fc-11e820ed8a48","Type":"ContainerDied","Data":"01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c"}
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.130961 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f7bmk" event={"ID":"32133cc3-d6eb-48c5-a3fc-11e820ed8a48","Type":"ContainerDied","Data":"ce69f607e431e2d95f65518965c5951a85dbaf01cd26c7e35a0121e2af38d286"}
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.130969 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-f7bmk"
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.130983 4909 scope.go:117] "RemoveContainer" containerID="01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c"
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.163624 4909 scope.go:117] "RemoveContainer" containerID="01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c"
Nov 26 07:12:49 crc kubenswrapper[4909]: E1126 07:12:49.164418 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c\": container with ID starting with 01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c not found: ID does not exist" containerID="01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c"
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.164465 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c"} err="failed to get container status \"01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c\": rpc error: code = NotFound desc = could not find container \"01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c\": container with ID starting with 01ca8299b37f5ce0224c2bd97e551b2797f4ce2ab3f3690bfda5f763cad2925c not found: ID does not exist"
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.172557 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-f7bmk"]
Nov 26 07:12:49 crc kubenswrapper[4909]: I1126 07:12:49.176810 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-f7bmk"]
Nov 26 07:12:50 crc kubenswrapper[4909]: I1126 07:12:50.147399 4909 generic.go:334] "Generic (PLEG): container finished" podID="1e954cbc-c96d-4655-9098-340b6a9452d6" containerID="568e394d5ed5302d0119bc8b0f39eb3fcb958298b95ec825e8b68076c45b6542" exitCode=0
Nov 26 07:12:50 crc kubenswrapper[4909]: I1126 07:12:50.147489 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" event={"ID":"1e954cbc-c96d-4655-9098-340b6a9452d6","Type":"ContainerDied","Data":"568e394d5ed5302d0119bc8b0f39eb3fcb958298b95ec825e8b68076c45b6542"}
Nov 26 07:12:50 crc kubenswrapper[4909]: I1126 07:12:50.513014 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32133cc3-d6eb-48c5-a3fc-11e820ed8a48" path="/var/lib/kubelet/pods/32133cc3-d6eb-48c5-a3fc-11e820ed8a48/volumes"
Nov 26 07:12:51 crc kubenswrapper[4909]: I1126 07:12:51.436883 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q"
Nov 26 07:12:51 crc kubenswrapper[4909]: I1126 07:12:51.540813 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrw4p\" (UniqueName: \"kubernetes.io/projected/1e954cbc-c96d-4655-9098-340b6a9452d6-kube-api-access-jrw4p\") pod \"1e954cbc-c96d-4655-9098-340b6a9452d6\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") "
Nov 26 07:12:51 crc kubenswrapper[4909]: I1126 07:12:51.540878 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-util\") pod \"1e954cbc-c96d-4655-9098-340b6a9452d6\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") "
Nov 26 07:12:51 crc kubenswrapper[4909]: I1126 07:12:51.540985 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-bundle\") pod \"1e954cbc-c96d-4655-9098-340b6a9452d6\" (UID: \"1e954cbc-c96d-4655-9098-340b6a9452d6\") "
Nov 26 07:12:51 crc kubenswrapper[4909]: I1126 07:12:51.541948 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-bundle" (OuterVolumeSpecName: "bundle") pod "1e954cbc-c96d-4655-9098-340b6a9452d6" (UID: "1e954cbc-c96d-4655-9098-340b6a9452d6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:12:51 crc kubenswrapper[4909]: I1126 07:12:51.545008 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e954cbc-c96d-4655-9098-340b6a9452d6-kube-api-access-jrw4p" (OuterVolumeSpecName: "kube-api-access-jrw4p") pod "1e954cbc-c96d-4655-9098-340b6a9452d6" (UID: "1e954cbc-c96d-4655-9098-340b6a9452d6"). InnerVolumeSpecName "kube-api-access-jrw4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:12:51 crc kubenswrapper[4909]: I1126 07:12:51.560242 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-util" (OuterVolumeSpecName: "util") pod "1e954cbc-c96d-4655-9098-340b6a9452d6" (UID: "1e954cbc-c96d-4655-9098-340b6a9452d6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:12:51 crc kubenswrapper[4909]: I1126 07:12:51.642028 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrw4p\" (UniqueName: \"kubernetes.io/projected/1e954cbc-c96d-4655-9098-340b6a9452d6-kube-api-access-jrw4p\") on node \"crc\" DevicePath \"\""
Nov 26 07:12:51 crc kubenswrapper[4909]: I1126 07:12:51.642060 4909 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-util\") on node \"crc\" DevicePath \"\""
Nov 26 07:12:51 crc kubenswrapper[4909]: I1126 07:12:51.642069 4909 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e954cbc-c96d-4655-9098-340b6a9452d6-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 07:12:52 crc kubenswrapper[4909]: I1126 07:12:52.165414 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q" event={"ID":"1e954cbc-c96d-4655-9098-340b6a9452d6","Type":"ContainerDied","Data":"802b4707e4aa75b1f840d539777f529e1fa648b69d8b1c7540bec035761f0ae8"}
Nov 26 07:12:52 crc kubenswrapper[4909]: I1126 07:12:52.165477 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q"
Nov 26 07:12:52 crc kubenswrapper[4909]: I1126 07:12:52.165489 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="802b4707e4aa75b1f840d539777f529e1fa648b69d8b1c7540bec035761f0ae8"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.156412 4909 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.289483 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"]
Nov 26 07:13:02 crc kubenswrapper[4909]: E1126 07:13:02.289768 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e954cbc-c96d-4655-9098-340b6a9452d6" containerName="pull"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.289785 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e954cbc-c96d-4655-9098-340b6a9452d6" containerName="pull"
Nov 26 07:13:02 crc kubenswrapper[4909]: E1126 07:13:02.289797 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e954cbc-c96d-4655-9098-340b6a9452d6" containerName="util"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.289807 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e954cbc-c96d-4655-9098-340b6a9452d6" containerName="util"
Nov 26 07:13:02 crc kubenswrapper[4909]: E1126 07:13:02.289829 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32133cc3-d6eb-48c5-a3fc-11e820ed8a48" containerName="console"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.289838 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="32133cc3-d6eb-48c5-a3fc-11e820ed8a48" containerName="console"
Nov 26 07:13:02 crc kubenswrapper[4909]: E1126 07:13:02.289856 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e954cbc-c96d-4655-9098-340b6a9452d6" containerName="extract"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.289864 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e954cbc-c96d-4655-9098-340b6a9452d6" containerName="extract"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.289980 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="32133cc3-d6eb-48c5-a3fc-11e820ed8a48" containerName="console"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.289996 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e954cbc-c96d-4655-9098-340b6a9452d6" containerName="extract"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.290479 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.294286 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.294323 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.294923 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zhvxq"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.295702 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.296504 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.323920 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"]
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.367720 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ace07e4-e65b-451c-8623-f71b4f7d4f14-webhook-cert\") pod \"metallb-operator-controller-manager-58dcdd989d-ctkx2\" (UID: \"8ace07e4-e65b-451c-8623-f71b4f7d4f14\") " pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.367780 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9msr\" (UniqueName: \"kubernetes.io/projected/8ace07e4-e65b-451c-8623-f71b4f7d4f14-kube-api-access-c9msr\") pod \"metallb-operator-controller-manager-58dcdd989d-ctkx2\" (UID: \"8ace07e4-e65b-451c-8623-f71b4f7d4f14\") " pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.367821 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ace07e4-e65b-451c-8623-f71b4f7d4f14-apiservice-cert\") pod \"metallb-operator-controller-manager-58dcdd989d-ctkx2\" (UID: \"8ace07e4-e65b-451c-8623-f71b4f7d4f14\") " pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.469056 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ace07e4-e65b-451c-8623-f71b4f7d4f14-webhook-cert\") pod \"metallb-operator-controller-manager-58dcdd989d-ctkx2\" (UID: \"8ace07e4-e65b-451c-8623-f71b4f7d4f14\") " pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.469114 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9msr\" (UniqueName: \"kubernetes.io/projected/8ace07e4-e65b-451c-8623-f71b4f7d4f14-kube-api-access-c9msr\") pod \"metallb-operator-controller-manager-58dcdd989d-ctkx2\" (UID: \"8ace07e4-e65b-451c-8623-f71b4f7d4f14\") " pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.469148 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ace07e4-e65b-451c-8623-f71b4f7d4f14-apiservice-cert\") pod \"metallb-operator-controller-manager-58dcdd989d-ctkx2\" (UID: \"8ace07e4-e65b-451c-8623-f71b4f7d4f14\") " pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.474329 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ace07e4-e65b-451c-8623-f71b4f7d4f14-apiservice-cert\") pod \"metallb-operator-controller-manager-58dcdd989d-ctkx2\" (UID: \"8ace07e4-e65b-451c-8623-f71b4f7d4f14\") " pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.487138 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ace07e4-e65b-451c-8623-f71b4f7d4f14-webhook-cert\") pod \"metallb-operator-controller-manager-58dcdd989d-ctkx2\" (UID: \"8ace07e4-e65b-451c-8623-f71b4f7d4f14\") " pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.489428 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9msr\" (UniqueName: \"kubernetes.io/projected/8ace07e4-e65b-451c-8623-f71b4f7d4f14-kube-api-access-c9msr\") pod \"metallb-operator-controller-manager-58dcdd989d-ctkx2\" (UID: \"8ace07e4-e65b-451c-8623-f71b4f7d4f14\") " pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.542204 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-76556765bb-nprm5"]
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.542814 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.547324 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.547340 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.547770 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dj7bc"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.564106 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-76556765bb-nprm5"]
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.605912 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.670968 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/96f75acf-1983-407f-a5dc-cfcb53dc9dc7-apiservice-cert\") pod \"metallb-operator-webhook-server-76556765bb-nprm5\" (UID: \"96f75acf-1983-407f-a5dc-cfcb53dc9dc7\") " pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.671052 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/96f75acf-1983-407f-a5dc-cfcb53dc9dc7-webhook-cert\") pod \"metallb-operator-webhook-server-76556765bb-nprm5\" (UID: \"96f75acf-1983-407f-a5dc-cfcb53dc9dc7\") " pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.671081 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl48j\" (UniqueName: \"kubernetes.io/projected/96f75acf-1983-407f-a5dc-cfcb53dc9dc7-kube-api-access-sl48j\") pod \"metallb-operator-webhook-server-76556765bb-nprm5\" (UID: \"96f75acf-1983-407f-a5dc-cfcb53dc9dc7\") " pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.772692 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/96f75acf-1983-407f-a5dc-cfcb53dc9dc7-webhook-cert\") pod \"metallb-operator-webhook-server-76556765bb-nprm5\" (UID: \"96f75acf-1983-407f-a5dc-cfcb53dc9dc7\") " pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.773023 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl48j\" (UniqueName: \"kubernetes.io/projected/96f75acf-1983-407f-a5dc-cfcb53dc9dc7-kube-api-access-sl48j\") pod \"metallb-operator-webhook-server-76556765bb-nprm5\" (UID: \"96f75acf-1983-407f-a5dc-cfcb53dc9dc7\") " pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.773081 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/96f75acf-1983-407f-a5dc-cfcb53dc9dc7-apiservice-cert\") pod \"metallb-operator-webhook-server-76556765bb-nprm5\" (UID: \"96f75acf-1983-407f-a5dc-cfcb53dc9dc7\") " pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.791076 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/96f75acf-1983-407f-a5dc-cfcb53dc9dc7-apiservice-cert\") pod \"metallb-operator-webhook-server-76556765bb-nprm5\" (UID: \"96f75acf-1983-407f-a5dc-cfcb53dc9dc7\") " pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5"
Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.791191 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/96f75acf-1983-407f-a5dc-cfcb53dc9dc7-webhook-cert\") pod \"metallb-operator-webhook-server-76556765bb-nprm5\" (UID: \"96f75acf-1983-407f-a5dc-cfcb53dc9dc7\") "
pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5" Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.794353 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl48j\" (UniqueName: \"kubernetes.io/projected/96f75acf-1983-407f-a5dc-cfcb53dc9dc7-kube-api-access-sl48j\") pod \"metallb-operator-webhook-server-76556765bb-nprm5\" (UID: \"96f75acf-1983-407f-a5dc-cfcb53dc9dc7\") " pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5" Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.806041 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"] Nov 26 07:13:02 crc kubenswrapper[4909]: I1126 07:13:02.855233 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5" Nov 26 07:13:03 crc kubenswrapper[4909]: I1126 07:13:03.086753 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-76556765bb-nprm5"] Nov 26 07:13:03 crc kubenswrapper[4909]: I1126 07:13:03.219329 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5" event={"ID":"96f75acf-1983-407f-a5dc-cfcb53dc9dc7","Type":"ContainerStarted","Data":"8a8082cd0a2d6528dfffa3d11abed8b1c822e7a3a5fb9c00af9240b545708139"} Nov 26 07:13:03 crc kubenswrapper[4909]: I1126 07:13:03.220446 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" event={"ID":"8ace07e4-e65b-451c-8623-f71b4f7d4f14","Type":"ContainerStarted","Data":"551da8c8f3fa339e0cd0cb323e38ced4fa55849ecaaa368d0571e97951e1b50e"} Nov 26 07:13:06 crc kubenswrapper[4909]: I1126 07:13:06.243836 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" event={"ID":"8ace07e4-e65b-451c-8623-f71b4f7d4f14","Type":"ContainerStarted","Data":"7f9df10f4906ec056b4ebd72b47a41386a1efb2995578f479966635a3c32ee18"} Nov 26 07:13:06 crc kubenswrapper[4909]: I1126 07:13:06.244167 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" Nov 26 07:13:06 crc kubenswrapper[4909]: I1126 07:13:06.260908 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" podStartSLOduration=1.942303211 podStartE2EDuration="4.260889079s" podCreationTimestamp="2025-11-26 07:13:02 +0000 UTC" firstStartedPulling="2025-11-26 07:13:02.820415535 +0000 UTC m=+754.966626701" lastFinishedPulling="2025-11-26 07:13:05.139001403 +0000 UTC m=+757.285212569" observedRunningTime="2025-11-26 07:13:06.257849918 +0000 UTC m=+758.404061094" watchObservedRunningTime="2025-11-26 07:13:06.260889079 +0000 UTC m=+758.407100245" Nov 26 07:13:07 crc kubenswrapper[4909]: I1126 07:13:07.301561 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:13:07 crc kubenswrapper[4909]: I1126 07:13:07.301648 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" 
podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:13:07 crc kubenswrapper[4909]: I1126 07:13:07.301702 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:13:07 crc kubenswrapper[4909]: I1126 07:13:07.302330 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4c41dceb1d36b1bd9dc641bc254e711b1901f8039253b64ac1c47b6d8051be7"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 07:13:07 crc kubenswrapper[4909]: I1126 07:13:07.302389 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://d4c41dceb1d36b1bd9dc641bc254e711b1901f8039253b64ac1c47b6d8051be7" gracePeriod=600 Nov 26 07:13:08 crc kubenswrapper[4909]: I1126 07:13:08.255330 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5" event={"ID":"96f75acf-1983-407f-a5dc-cfcb53dc9dc7","Type":"ContainerStarted","Data":"5f46fd3b618e0d28fb45f3d8f8ef54aa49327245680f2c29a7870e137a123ac9"} Nov 26 07:13:08 crc kubenswrapper[4909]: I1126 07:13:08.255706 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5" Nov 26 07:13:08 crc kubenswrapper[4909]: I1126 07:13:08.259177 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="d4c41dceb1d36b1bd9dc641bc254e711b1901f8039253b64ac1c47b6d8051be7" exitCode=0 Nov 26 07:13:08 crc kubenswrapper[4909]: I1126 07:13:08.259238 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"d4c41dceb1d36b1bd9dc641bc254e711b1901f8039253b64ac1c47b6d8051be7"} Nov 26 07:13:08 crc kubenswrapper[4909]: I1126 07:13:08.259287 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"e56f7341ca39cd31863ba15982a9b0b7165f8ceb520eefc8a8ee6734e9f64390"} Nov 26 07:13:08 crc kubenswrapper[4909]: I1126 07:13:08.259309 4909 scope.go:117] "RemoveContainer" containerID="078c5f364f15712f7c294800057e109895b88211acfc083adc8b6dc0d2e41112" Nov 26 07:13:08 crc kubenswrapper[4909]: I1126 07:13:08.277355 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5" podStartSLOduration=1.827715473 podStartE2EDuration="6.277329333s" podCreationTimestamp="2025-11-26 07:13:02 +0000 UTC" firstStartedPulling="2025-11-26 07:13:03.09689826 +0000 UTC m=+755.243109426" lastFinishedPulling="2025-11-26 07:13:07.54651212 +0000 UTC m=+759.692723286" observedRunningTime="2025-11-26 07:13:08.273188002 +0000 UTC m=+760.419399188" watchObservedRunningTime="2025-11-26 07:13:08.277329333 +0000 UTC m=+760.423540499" Nov 26 07:13:22 crc 
kubenswrapper[4909]: I1126 07:13:22.866444 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-76556765bb-nprm5" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.056703 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c4np8"] Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.058134 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.066423 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c4np8"] Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.137517 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-catalog-content\") pod \"redhat-operators-c4np8\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.137604 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bclq\" (UniqueName: \"kubernetes.io/projected/e317163e-225f-461a-9c20-d783ad7a13cc-kube-api-access-4bclq\") pod \"redhat-operators-c4np8\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.137663 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-utilities\") pod \"redhat-operators-c4np8\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.239108 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-utilities\") pod \"redhat-operators-c4np8\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.239174 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-catalog-content\") pod \"redhat-operators-c4np8\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.239213 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bclq\" (UniqueName: \"kubernetes.io/projected/e317163e-225f-461a-9c20-d783ad7a13cc-kube-api-access-4bclq\") pod \"redhat-operators-c4np8\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.239630 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-utilities\") pod \"redhat-operators-c4np8\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.239817 
4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-catalog-content\") pod \"redhat-operators-c4np8\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.270113 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bclq\" (UniqueName: \"kubernetes.io/projected/e317163e-225f-461a-9c20-d783ad7a13cc-kube-api-access-4bclq\") pod \"redhat-operators-c4np8\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.372546 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:23 crc kubenswrapper[4909]: I1126 07:13:23.874422 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c4np8"] Nov 26 07:13:24 crc kubenswrapper[4909]: I1126 07:13:24.340695 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4np8" event={"ID":"e317163e-225f-461a-9c20-d783ad7a13cc","Type":"ContainerStarted","Data":"1c49afd5210ba651da76153910beeca85ca0c530d2c3097a4b845857f33e18bd"} Nov 26 07:13:25 crc kubenswrapper[4909]: I1126 07:13:25.347515 4909 generic.go:334] "Generic (PLEG): container finished" podID="e317163e-225f-461a-9c20-d783ad7a13cc" containerID="22428a52538dfc4517762e358c6604f01eb411149268a5f466eb7fc88cb3d83e" exitCode=0 Nov 26 07:13:25 crc kubenswrapper[4909]: I1126 07:13:25.347568 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4np8" event={"ID":"e317163e-225f-461a-9c20-d783ad7a13cc","Type":"ContainerDied","Data":"22428a52538dfc4517762e358c6604f01eb411149268a5f466eb7fc88cb3d83e"} Nov 26 07:13:26 crc kubenswrapper[4909]: I1126 07:13:26.354906 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4np8" event={"ID":"e317163e-225f-461a-9c20-d783ad7a13cc","Type":"ContainerStarted","Data":"23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c"} Nov 26 07:13:27 crc kubenswrapper[4909]: I1126 07:13:27.365729 4909 generic.go:334] "Generic (PLEG): container finished" podID="e317163e-225f-461a-9c20-d783ad7a13cc" containerID="23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c" exitCode=0 Nov 26 07:13:27 crc kubenswrapper[4909]: I1126 07:13:27.365815 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4np8" event={"ID":"e317163e-225f-461a-9c20-d783ad7a13cc","Type":"ContainerDied","Data":"23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c"} Nov 26 07:13:28 crc kubenswrapper[4909]: I1126 07:13:28.373446 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4np8" event={"ID":"e317163e-225f-461a-9c20-d783ad7a13cc","Type":"ContainerStarted","Data":"5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b"} Nov 26 07:13:28 crc kubenswrapper[4909]: I1126 07:13:28.397815 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c4np8" podStartSLOduration=2.926342633 podStartE2EDuration="5.397784572s" podCreationTimestamp="2025-11-26 07:13:23 +0000 UTC" firstStartedPulling="2025-11-26 07:13:25.349325177 +0000 UTC 
m=+777.495536343" lastFinishedPulling="2025-11-26 07:13:27.820767096 +0000 UTC m=+779.966978282" observedRunningTime="2025-11-26 07:13:28.394058201 +0000 UTC m=+780.540269417" watchObservedRunningTime="2025-11-26 07:13:28.397784572 +0000 UTC m=+780.543995778" Nov 26 07:13:31 crc kubenswrapper[4909]: I1126 07:13:31.842163 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zrclp"] Nov 26 07:13:31 crc kubenswrapper[4909]: I1126 07:13:31.845967 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:31 crc kubenswrapper[4909]: I1126 07:13:31.862674 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrclp"] Nov 26 07:13:31 crc kubenswrapper[4909]: I1126 07:13:31.968151 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-utilities\") pod \"redhat-marketplace-zrclp\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:31 crc kubenswrapper[4909]: I1126 07:13:31.968200 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpknd\" (UniqueName: \"kubernetes.io/projected/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-kube-api-access-gpknd\") pod \"redhat-marketplace-zrclp\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:31 crc kubenswrapper[4909]: I1126 07:13:31.968284 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-catalog-content\") pod \"redhat-marketplace-zrclp\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:32 crc kubenswrapper[4909]: I1126 07:13:32.069115 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-utilities\") pod \"redhat-marketplace-zrclp\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:32 crc kubenswrapper[4909]: I1126 07:13:32.069515 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpknd\" (UniqueName: \"kubernetes.io/projected/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-kube-api-access-gpknd\") pod \"redhat-marketplace-zrclp\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:32 crc kubenswrapper[4909]: I1126 07:13:32.069580 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-catalog-content\") pod \"redhat-marketplace-zrclp\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:32 crc kubenswrapper[4909]: I1126 07:13:32.070072 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-catalog-content\") pod \"redhat-marketplace-zrclp\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " 
pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:32 crc kubenswrapper[4909]: I1126 07:13:32.070729 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-utilities\") pod \"redhat-marketplace-zrclp\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:32 crc kubenswrapper[4909]: I1126 07:13:32.091610 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpknd\" (UniqueName: \"kubernetes.io/projected/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-kube-api-access-gpknd\") pod \"redhat-marketplace-zrclp\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:32 crc kubenswrapper[4909]: I1126 07:13:32.178081 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:32 crc kubenswrapper[4909]: I1126 07:13:32.436513 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrclp"] Nov 26 07:13:33 crc kubenswrapper[4909]: I1126 07:13:33.373107 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:33 crc kubenswrapper[4909]: I1126 07:13:33.373157 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:33 crc kubenswrapper[4909]: I1126 07:13:33.402030 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrclp" event={"ID":"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c","Type":"ContainerStarted","Data":"ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995"} Nov 26 07:13:33 crc kubenswrapper[4909]: I1126 07:13:33.402069 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrclp" event={"ID":"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c","Type":"ContainerStarted","Data":"d26da1212935fb4cd68b15204a189fbc47a215660d80ab3f925594933afc1bba"} Nov 26 07:13:33 crc kubenswrapper[4909]: I1126 07:13:33.424857 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:33 crc kubenswrapper[4909]: I1126 07:13:33.467153 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:35 crc kubenswrapper[4909]: I1126 07:13:35.418676 4909 generic.go:334] "Generic (PLEG): container finished" podID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" containerID="ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995" exitCode=0 Nov 26 07:13:35 crc kubenswrapper[4909]: I1126 07:13:35.418825 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrclp" event={"ID":"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c","Type":"ContainerDied","Data":"ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995"} Nov 26 07:13:35 crc kubenswrapper[4909]: I1126 07:13:35.821662 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c4np8"] Nov 26 07:13:35 crc kubenswrapper[4909]: I1126 07:13:35.822055 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c4np8" 
podUID="e317163e-225f-461a-9c20-d783ad7a13cc" containerName="registry-server" containerID="cri-o://5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b" gracePeriod=2 Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.284457 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.424489 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-catalog-content\") pod \"e317163e-225f-461a-9c20-d783ad7a13cc\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.424541 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bclq\" (UniqueName: \"kubernetes.io/projected/e317163e-225f-461a-9c20-d783ad7a13cc-kube-api-access-4bclq\") pod \"e317163e-225f-461a-9c20-d783ad7a13cc\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.424615 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-utilities\") pod \"e317163e-225f-461a-9c20-d783ad7a13cc\" (UID: \"e317163e-225f-461a-9c20-d783ad7a13cc\") " Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.425857 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-utilities" (OuterVolumeSpecName: "utilities") pod "e317163e-225f-461a-9c20-d783ad7a13cc" (UID: "e317163e-225f-461a-9c20-d783ad7a13cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.427825 4909 generic.go:334] "Generic (PLEG): container finished" podID="e317163e-225f-461a-9c20-d783ad7a13cc" containerID="5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b" exitCode=0 Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.427874 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c4np8" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.427896 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4np8" event={"ID":"e317163e-225f-461a-9c20-d783ad7a13cc","Type":"ContainerDied","Data":"5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b"} Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.427926 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4np8" event={"ID":"e317163e-225f-461a-9c20-d783ad7a13cc","Type":"ContainerDied","Data":"1c49afd5210ba651da76153910beeca85ca0c530d2c3097a4b845857f33e18bd"} Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.427945 4909 scope.go:117] "RemoveContainer" containerID="5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.429743 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e317163e-225f-461a-9c20-d783ad7a13cc-kube-api-access-4bclq" (OuterVolumeSpecName: "kube-api-access-4bclq") pod "e317163e-225f-461a-9c20-d783ad7a13cc" (UID: "e317163e-225f-461a-9c20-d783ad7a13cc"). InnerVolumeSpecName "kube-api-access-4bclq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.430433 4909 generic.go:334] "Generic (PLEG): container finished" podID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" containerID="8ba3e5b0b1bec7de94de2976a7e0904e9afc787fec5576a37cc1083240729428" exitCode=0 Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.430620 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrclp" event={"ID":"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c","Type":"ContainerDied","Data":"8ba3e5b0b1bec7de94de2976a7e0904e9afc787fec5576a37cc1083240729428"} Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.478066 4909 scope.go:117] "RemoveContainer" containerID="23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.497868 4909 scope.go:117] "RemoveContainer" containerID="22428a52538dfc4517762e358c6604f01eb411149268a5f466eb7fc88cb3d83e" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.507199 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e317163e-225f-461a-9c20-d783ad7a13cc" (UID: "e317163e-225f-461a-9c20-d783ad7a13cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.511882 4909 scope.go:117] "RemoveContainer" containerID="5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b" Nov 26 07:13:36 crc kubenswrapper[4909]: E1126 07:13:36.512228 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b\": container with ID starting with 5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b not found: ID does not exist" containerID="5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.512276 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b"} err="failed to get container status \"5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b\": rpc error: code = NotFound desc = could not find container \"5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b\": container with ID starting with 5d03cfc5f87305774b74e1b1efee61795c7163bbe7090fd972cc084eeada6a8b not found: ID does not exist" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.512334 4909 scope.go:117] "RemoveContainer" containerID="23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c" Nov 26 07:13:36 crc kubenswrapper[4909]: E1126 07:13:36.512672 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c\": container with ID starting with 23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c not found: ID does not exist" containerID="23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.512709 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c"} 
err="failed to get container status \"23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c\": rpc error: code = NotFound desc = could not find container \"23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c\": container with ID starting with 23319e2cdfd214cd2db2f9402e0d3e4e3b8d608b245271a06a43f0720dfb355c not found: ID does not exist" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.512733 4909 scope.go:117] "RemoveContainer" containerID="22428a52538dfc4517762e358c6604f01eb411149268a5f466eb7fc88cb3d83e" Nov 26 07:13:36 crc kubenswrapper[4909]: E1126 07:13:36.512975 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22428a52538dfc4517762e358c6604f01eb411149268a5f466eb7fc88cb3d83e\": container with ID starting with 22428a52538dfc4517762e358c6604f01eb411149268a5f466eb7fc88cb3d83e not found: ID does not exist" containerID="22428a52538dfc4517762e358c6604f01eb411149268a5f466eb7fc88cb3d83e" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.513003 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22428a52538dfc4517762e358c6604f01eb411149268a5f466eb7fc88cb3d83e"} err="failed to get container status \"22428a52538dfc4517762e358c6604f01eb411149268a5f466eb7fc88cb3d83e\": rpc error: code = NotFound desc = could not find container \"22428a52538dfc4517762e358c6604f01eb411149268a5f466eb7fc88cb3d83e\": container with ID starting with 22428a52538dfc4517762e358c6604f01eb411149268a5f466eb7fc88cb3d83e not found: ID does not exist" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.525932 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.526010 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e317163e-225f-461a-9c20-d783ad7a13cc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.526053 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bclq\" (UniqueName: \"kubernetes.io/projected/e317163e-225f-461a-9c20-d783ad7a13cc-kube-api-access-4bclq\") on node \"crc\" DevicePath \"\"" Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.760378 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c4np8"] Nov 26 07:13:36 crc kubenswrapper[4909]: I1126 07:13:36.770106 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c4np8"] Nov 26 07:13:37 crc kubenswrapper[4909]: I1126 07:13:37.440139 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrclp" event={"ID":"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c","Type":"ContainerStarted","Data":"5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb"} Nov 26 07:13:37 crc kubenswrapper[4909]: I1126 07:13:37.461885 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zrclp" podStartSLOduration=4.932112279 podStartE2EDuration="6.461861892s" podCreationTimestamp="2025-11-26 07:13:31 +0000 UTC" firstStartedPulling="2025-11-26 07:13:35.421220703 +0000 UTC m=+787.567431889" lastFinishedPulling="2025-11-26 07:13:36.950970326 +0000 UTC m=+789.097181502" 
observedRunningTime="2025-11-26 07:13:37.455979073 +0000 UTC m=+789.602190269" watchObservedRunningTime="2025-11-26 07:13:37.461861892 +0000 UTC m=+789.608073078" Nov 26 07:13:38 crc kubenswrapper[4909]: I1126 07:13:38.509220 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e317163e-225f-461a-9c20-d783ad7a13cc" path="/var/lib/kubelet/pods/e317163e-225f-461a-9c20-d783ad7a13cc/volumes" Nov 26 07:13:42 crc kubenswrapper[4909]: I1126 07:13:42.178213 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:42 crc kubenswrapper[4909]: I1126 07:13:42.178253 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:42 crc kubenswrapper[4909]: I1126 07:13:42.217378 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:42 crc kubenswrapper[4909]: I1126 07:13:42.523663 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:42 crc kubenswrapper[4909]: I1126 07:13:42.576767 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrclp"] Nov 26 07:13:42 crc kubenswrapper[4909]: I1126 07:13:42.608790 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.239046 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq"] Nov 26 07:13:43 crc kubenswrapper[4909]: E1126 07:13:43.241081 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e317163e-225f-461a-9c20-d783ad7a13cc" containerName="extract-utilities" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.241297 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e317163e-225f-461a-9c20-d783ad7a13cc" containerName="extract-utilities" Nov 26 07:13:43 crc kubenswrapper[4909]: E1126 07:13:43.241487 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e317163e-225f-461a-9c20-d783ad7a13cc" containerName="registry-server" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.241638 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e317163e-225f-461a-9c20-d783ad7a13cc" containerName="registry-server" Nov 26 07:13:43 crc kubenswrapper[4909]: E1126 07:13:43.241799 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e317163e-225f-461a-9c20-d783ad7a13cc" containerName="extract-content" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.241937 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e317163e-225f-461a-9c20-d783ad7a13cc" containerName="extract-content" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.242292 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e317163e-225f-461a-9c20-d783ad7a13cc" containerName="registry-server" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.243027 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.246028 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.247717 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zm4pk" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.261954 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-wf87k"] Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.264955 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.267256 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.269767 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq"] Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.272478 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.327785 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfrh\" (UniqueName: \"kubernetes.io/projected/33795123-6b00-438c-8dc7-b298f7c66924-kube-api-access-5hfrh\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.327861 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/33795123-6b00-438c-8dc7-b298f7c66924-frr-startup\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.327882 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvnjg\" (UniqueName: \"kubernetes.io/projected/b1858595-566b-40f9-bf2b-bb6e1bd5990a-kube-api-access-xvnjg\") pod \"frr-k8s-webhook-server-6998585d5-xw4zq\" (UID: \"b1858595-566b-40f9-bf2b-bb6e1bd5990a\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.327908 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-frr-sockets\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.327949 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1858595-566b-40f9-bf2b-bb6e1bd5990a-cert\") pod \"frr-k8s-webhook-server-6998585d5-xw4zq\" (UID: \"b1858595-566b-40f9-bf2b-bb6e1bd5990a\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.327976 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-reloader\") 
pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.328015 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-frr-conf\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.328041 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33795123-6b00-438c-8dc7-b298f7c66924-metrics-certs\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.328066 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-metrics\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.328129 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-9glvr"] Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.329305 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.331798 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.334031 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qpv4w"] Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.334927 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qpv4w" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.337743 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.337898 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.337937 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.338024 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ljk2q" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.343122 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-9glvr"] Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429170 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1c98a2e9-110c-44a2-8d31-39e894c7c759-metallb-excludel2\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429225 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-frr-conf\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429245 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-memberlist\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429262 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f633473-e125-4441-a526-ea45f81f39a3-cert\") pod \"controller-6c7b4b5f48-9glvr\" (UID: \"6f633473-e125-4441-a526-ea45f81f39a3\") " pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429296 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33795123-6b00-438c-8dc7-b298f7c66924-metrics-certs\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429312 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9cpf\" (UniqueName: \"kubernetes.io/projected/6f633473-e125-4441-a526-ea45f81f39a3-kube-api-access-z9cpf\") pod \"controller-6c7b4b5f48-9glvr\" (UID: \"6f633473-e125-4441-a526-ea45f81f39a3\") " pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429343 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-metrics\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 
07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429364 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f633473-e125-4441-a526-ea45f81f39a3-metrics-certs\") pod \"controller-6c7b4b5f48-9glvr\" (UID: \"6f633473-e125-4441-a526-ea45f81f39a3\") " pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429397 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hfrh\" (UniqueName: \"kubernetes.io/projected/33795123-6b00-438c-8dc7-b298f7c66924-kube-api-access-5hfrh\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429420 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/33795123-6b00-438c-8dc7-b298f7c66924-frr-startup\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429441 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvnjg\" (UniqueName: \"kubernetes.io/projected/b1858595-566b-40f9-bf2b-bb6e1bd5990a-kube-api-access-xvnjg\") pod \"frr-k8s-webhook-server-6998585d5-xw4zq\" (UID: \"b1858595-566b-40f9-bf2b-bb6e1bd5990a\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429467 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-frr-sockets\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429485 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1858595-566b-40f9-bf2b-bb6e1bd5990a-cert\") pod \"frr-k8s-webhook-server-6998585d5-xw4zq\" (UID: \"b1858595-566b-40f9-bf2b-bb6e1bd5990a\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429501 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-reloader\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429517 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-metrics-certs\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.429543 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgk6z\" (UniqueName: \"kubernetes.io/projected/1c98a2e9-110c-44a2-8d31-39e894c7c759-kube-api-access-jgk6z\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:43 crc kubenswrapper[4909]: E1126 07:13:43.429704 4909 secret.go:188] Couldn't get secret 
metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 26 07:13:43 crc kubenswrapper[4909]: E1126 07:13:43.429749 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33795123-6b00-438c-8dc7-b298f7c66924-metrics-certs podName:33795123-6b00-438c-8dc7-b298f7c66924 nodeName:}" failed. No retries permitted until 2025-11-26 07:13:43.929733185 +0000 UTC m=+796.075944351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/33795123-6b00-438c-8dc7-b298f7c66924-metrics-certs") pod "frr-k8s-wf87k" (UID: "33795123-6b00-438c-8dc7-b298f7c66924") : secret "frr-k8s-certs-secret" not found Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.430830 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-frr-conf\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.431081 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-frr-sockets\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.431133 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-metrics\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.431272 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/33795123-6b00-438c-8dc7-b298f7c66924-reloader\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.432330 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/33795123-6b00-438c-8dc7-b298f7c66924-frr-startup\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.448679 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1858595-566b-40f9-bf2b-bb6e1bd5990a-cert\") pod \"frr-k8s-webhook-server-6998585d5-xw4zq\" (UID: \"b1858595-566b-40f9-bf2b-bb6e1bd5990a\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.456256 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hfrh\" (UniqueName: \"kubernetes.io/projected/33795123-6b00-438c-8dc7-b298f7c66924-kube-api-access-5hfrh\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.457473 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvnjg\" (UniqueName: \"kubernetes.io/projected/b1858595-566b-40f9-bf2b-bb6e1bd5990a-kube-api-access-xvnjg\") pod \"frr-k8s-webhook-server-6998585d5-xw4zq\" (UID: \"b1858595-566b-40f9-bf2b-bb6e1bd5990a\") " 
pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.530923 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f633473-e125-4441-a526-ea45f81f39a3-metrics-certs\") pod \"controller-6c7b4b5f48-9glvr\" (UID: \"6f633473-e125-4441-a526-ea45f81f39a3\") " pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:43 crc kubenswrapper[4909]: E1126 07:13:43.531715 4909 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 26 07:13:43 crc kubenswrapper[4909]: E1126 07:13:43.531799 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-metrics-certs podName:1c98a2e9-110c-44a2-8d31-39e894c7c759 nodeName:}" failed. No retries permitted until 2025-11-26 07:13:44.031781375 +0000 UTC m=+796.177992541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-metrics-certs") pod "speaker-qpv4w" (UID: "1c98a2e9-110c-44a2-8d31-39e894c7c759") : secret "speaker-certs-secret" not found Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.531527 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-metrics-certs\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.532341 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgk6z\" (UniqueName: \"kubernetes.io/projected/1c98a2e9-110c-44a2-8d31-39e894c7c759-kube-api-access-jgk6z\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.532399 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1c98a2e9-110c-44a2-8d31-39e894c7c759-metallb-excludel2\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.533365 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-memberlist\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.533305 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1c98a2e9-110c-44a2-8d31-39e894c7c759-metallb-excludel2\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:43 crc kubenswrapper[4909]: E1126 07:13:43.533459 4909 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 26 07:13:43 crc kubenswrapper[4909]: E1126 07:13:43.533501 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-memberlist podName:1c98a2e9-110c-44a2-8d31-39e894c7c759 nodeName:}" failed. 
No retries permitted until 2025-11-26 07:13:44.033486071 +0000 UTC m=+796.179697237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-memberlist") pod "speaker-qpv4w" (UID: "1c98a2e9-110c-44a2-8d31-39e894c7c759") : secret "metallb-memberlist" not found Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.533387 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f633473-e125-4441-a526-ea45f81f39a3-cert\") pod \"controller-6c7b4b5f48-9glvr\" (UID: \"6f633473-e125-4441-a526-ea45f81f39a3\") " pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.533829 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9cpf\" (UniqueName: \"kubernetes.io/projected/6f633473-e125-4441-a526-ea45f81f39a3-kube-api-access-z9cpf\") pod \"controller-6c7b4b5f48-9glvr\" (UID: \"6f633473-e125-4441-a526-ea45f81f39a3\") " pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.534757 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.534851 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f633473-e125-4441-a526-ea45f81f39a3-metrics-certs\") pod \"controller-6c7b4b5f48-9glvr\" (UID: \"6f633473-e125-4441-a526-ea45f81f39a3\") " pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.548066 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f633473-e125-4441-a526-ea45f81f39a3-cert\") pod \"controller-6c7b4b5f48-9glvr\" (UID: \"6f633473-e125-4441-a526-ea45f81f39a3\") " pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.554748 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgk6z\" (UniqueName: \"kubernetes.io/projected/1c98a2e9-110c-44a2-8d31-39e894c7c759-kube-api-access-jgk6z\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.556139 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9cpf\" (UniqueName: \"kubernetes.io/projected/6f633473-e125-4441-a526-ea45f81f39a3-kube-api-access-z9cpf\") pod \"controller-6c7b4b5f48-9glvr\" (UID: \"6f633473-e125-4441-a526-ea45f81f39a3\") " pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.557739 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.657001 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.942899 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33795123-6b00-438c-8dc7-b298f7c66924-metrics-certs\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:43 crc kubenswrapper[4909]: I1126 07:13:43.947073 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33795123-6b00-438c-8dc7-b298f7c66924-metrics-certs\") pod \"frr-k8s-wf87k\" (UID: \"33795123-6b00-438c-8dc7-b298f7c66924\") " pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.019139 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq"] Nov 26 07:13:44 crc kubenswrapper[4909]: W1126 07:13:44.032090 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1858595_566b_40f9_bf2b_bb6e1bd5990a.slice/crio-37f824036b7f3531c0432492f2d4ae25ec1885c6860a9e9bec009c032852b95f WatchSource:0}: Error finding container 37f824036b7f3531c0432492f2d4ae25ec1885c6860a9e9bec009c032852b95f: Status 404 returned error can't find the container with id 37f824036b7f3531c0432492f2d4ae25ec1885c6860a9e9bec009c032852b95f Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.044884 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-memberlist\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:44 crc kubenswrapper[4909]: E1126 07:13:44.045254 4909 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 26 07:13:44 crc kubenswrapper[4909]: E1126 07:13:44.045534 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-memberlist podName:1c98a2e9-110c-44a2-8d31-39e894c7c759 nodeName:}" failed. No retries permitted until 2025-11-26 07:13:45.045511089 +0000 UTC m=+797.191722255 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-memberlist") pod "speaker-qpv4w" (UID: "1c98a2e9-110c-44a2-8d31-39e894c7c759") : secret "metallb-memberlist" not found Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.045744 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-metrics-certs\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.050190 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-metrics-certs\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.053930 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-9glvr"] Nov 26 07:13:44 crc kubenswrapper[4909]: W1126 07:13:44.056541 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f633473_e125_4441_a526_ea45f81f39a3.slice/crio-87a7e2158b068e59025d257ec4bcdb6667d41ffac7c9d1471a664a4a95cc7575 WatchSource:0}: Error finding container 87a7e2158b068e59025d257ec4bcdb6667d41ffac7c9d1471a664a4a95cc7575: Status 404 returned error can't find the container with id 87a7e2158b068e59025d257ec4bcdb6667d41ffac7c9d1471a664a4a95cc7575 Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.193256 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.484579 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" event={"ID":"b1858595-566b-40f9-bf2b-bb6e1bd5990a","Type":"ContainerStarted","Data":"37f824036b7f3531c0432492f2d4ae25ec1885c6860a9e9bec009c032852b95f"} Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.486065 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-9glvr" event={"ID":"6f633473-e125-4441-a526-ea45f81f39a3","Type":"ContainerStarted","Data":"71b8e0c50cf38acf930289934565ecb8e98b9266d913d941a60a01959fafa9ad"} Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.486110 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-9glvr" event={"ID":"6f633473-e125-4441-a526-ea45f81f39a3","Type":"ContainerStarted","Data":"2328b57c35e72d30e13910b941fdfa0d15cfd53d7d974ef1026b2e9b62170c76"} Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.486126 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-9glvr" event={"ID":"6f633473-e125-4441-a526-ea45f81f39a3","Type":"ContainerStarted","Data":"87a7e2158b068e59025d257ec4bcdb6667d41ffac7c9d1471a664a4a95cc7575"} Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.486168 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.486849 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wf87k" 
event={"ID":"33795123-6b00-438c-8dc7-b298f7c66924","Type":"ContainerStarted","Data":"db6992c32c5f3028c99aa24e8e2ff81c995b525c93fd9865bf0f7036180c3bbd"} Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.487023 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zrclp" podUID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" containerName="registry-server" containerID="cri-o://5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb" gracePeriod=2 Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.505697 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-9glvr" podStartSLOduration=1.505675184 podStartE2EDuration="1.505675184s" podCreationTimestamp="2025-11-26 07:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:13:44.502375455 +0000 UTC m=+796.648586621" watchObservedRunningTime="2025-11-26 07:13:44.505675184 +0000 UTC m=+796.651886350" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.868531 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.870725 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5jfwm"] Nov 26 07:13:44 crc kubenswrapper[4909]: E1126 07:13:44.871002 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" containerName="extract-content" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.871016 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" containerName="extract-content" Nov 26 07:13:44 crc kubenswrapper[4909]: E1126 07:13:44.871030 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" containerName="extract-utilities" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.871037 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" containerName="extract-utilities" Nov 26 07:13:44 crc kubenswrapper[4909]: E1126 07:13:44.871045 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" containerName="registry-server" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.871052 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" containerName="registry-server" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.871185 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" containerName="registry-server" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.872267 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.879695 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jfwm"] Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.959404 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpknd\" (UniqueName: \"kubernetes.io/projected/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-kube-api-access-gpknd\") pod \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.959442 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-catalog-content\") pod \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.959552 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-utilities\") pod \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\" (UID: \"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c\") " Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.959927 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-utilities\") pod \"certified-operators-5jfwm\" (UID: \"8e650c64-3c73-4c67-970c-32fbc391b24d\") " pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.959947 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-catalog-content\") pod \"certified-operators-5jfwm\" (UID: \"8e650c64-3c73-4c67-970c-32fbc391b24d\") " pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.959973 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4tld\" (UniqueName: \"kubernetes.io/projected/8e650c64-3c73-4c67-970c-32fbc391b24d-kube-api-access-k4tld\") pod \"certified-operators-5jfwm\" (UID: \"8e650c64-3c73-4c67-970c-32fbc391b24d\") " pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.961789 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-utilities" (OuterVolumeSpecName: "utilities") pod "b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" (UID: "b7bc0290-b2ce-4e30-b4ce-11b59ab0644c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.966334 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-kube-api-access-gpknd" (OuterVolumeSpecName: "kube-api-access-gpknd") pod "b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" (UID: "b7bc0290-b2ce-4e30-b4ce-11b59ab0644c"). InnerVolumeSpecName "kube-api-access-gpknd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:13:44 crc kubenswrapper[4909]: I1126 07:13:44.987226 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" (UID: "b7bc0290-b2ce-4e30-b4ce-11b59ab0644c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.061302 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-utilities\") pod \"certified-operators-5jfwm\" (UID: \"8e650c64-3c73-4c67-970c-32fbc391b24d\") " pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.061354 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-catalog-content\") pod \"certified-operators-5jfwm\" (UID: \"8e650c64-3c73-4c67-970c-32fbc391b24d\") " pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.061396 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4tld\" (UniqueName: \"kubernetes.io/projected/8e650c64-3c73-4c67-970c-32fbc391b24d-kube-api-access-k4tld\") pod \"certified-operators-5jfwm\" (UID: \"8e650c64-3c73-4c67-970c-32fbc391b24d\") " pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.061435 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-memberlist\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.061559 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpknd\" (UniqueName: \"kubernetes.io/projected/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-kube-api-access-gpknd\") on node \"crc\" DevicePath \"\"" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.061574 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.061606 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.061941 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-utilities\") pod \"certified-operators-5jfwm\" (UID: \"8e650c64-3c73-4c67-970c-32fbc391b24d\") " pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.061968 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-catalog-content\") pod \"certified-operators-5jfwm\" (UID: 
\"8e650c64-3c73-4c67-970c-32fbc391b24d\") " pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.065891 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1c98a2e9-110c-44a2-8d31-39e894c7c759-memberlist\") pod \"speaker-qpv4w\" (UID: \"1c98a2e9-110c-44a2-8d31-39e894c7c759\") " pod="metallb-system/speaker-qpv4w" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.078758 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4tld\" (UniqueName: \"kubernetes.io/projected/8e650c64-3c73-4c67-970c-32fbc391b24d-kube-api-access-k4tld\") pod \"certified-operators-5jfwm\" (UID: \"8e650c64-3c73-4c67-970c-32fbc391b24d\") " pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.159003 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qpv4w" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.185015 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.505535 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpv4w" event={"ID":"1c98a2e9-110c-44a2-8d31-39e894c7c759","Type":"ContainerStarted","Data":"276eabebbfc04fd63eda022804b7653aa688729dca83167fff3a6135dff75194"} Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.510184 4909 generic.go:334] "Generic (PLEG): container finished" podID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" containerID="5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb" exitCode=0 Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.512619 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrclp" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.516735 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrclp" event={"ID":"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c","Type":"ContainerDied","Data":"5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb"} Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.516803 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrclp" event={"ID":"b7bc0290-b2ce-4e30-b4ce-11b59ab0644c","Type":"ContainerDied","Data":"d26da1212935fb4cd68b15204a189fbc47a215660d80ab3f925594933afc1bba"} Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.516832 4909 scope.go:117] "RemoveContainer" containerID="5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.528092 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jfwm"] Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.544122 4909 scope.go:117] "RemoveContainer" containerID="8ba3e5b0b1bec7de94de2976a7e0904e9afc787fec5576a37cc1083240729428" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.557005 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrclp"] Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.557667 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrclp"] Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.578224 4909 scope.go:117] "RemoveContainer" containerID="ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.664361 4909 scope.go:117] "RemoveContainer" containerID="5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb" Nov 26 07:13:45 crc kubenswrapper[4909]: E1126 07:13:45.669778 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb\": container with ID starting with 5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb not found: ID does not exist" containerID="5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.669841 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb"} err="failed to get container status \"5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb\": rpc error: code = NotFound desc = could not find container \"5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb\": container with ID starting with 5ce947c2b9e76a6f0e9482718a65b1a4498861f9b5a4f1b59b315c1e15276ecb not found: ID does not exist" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.669871 4909 scope.go:117] "RemoveContainer" containerID="8ba3e5b0b1bec7de94de2976a7e0904e9afc787fec5576a37cc1083240729428" Nov 26 07:13:45 crc kubenswrapper[4909]: E1126 07:13:45.670291 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ba3e5b0b1bec7de94de2976a7e0904e9afc787fec5576a37cc1083240729428\": container with ID starting with 8ba3e5b0b1bec7de94de2976a7e0904e9afc787fec5576a37cc1083240729428 not found: 
ID does not exist" containerID="8ba3e5b0b1bec7de94de2976a7e0904e9afc787fec5576a37cc1083240729428" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.670315 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ba3e5b0b1bec7de94de2976a7e0904e9afc787fec5576a37cc1083240729428"} err="failed to get container status \"8ba3e5b0b1bec7de94de2976a7e0904e9afc787fec5576a37cc1083240729428\": rpc error: code = NotFound desc = could not find container \"8ba3e5b0b1bec7de94de2976a7e0904e9afc787fec5576a37cc1083240729428\": container with ID starting with 8ba3e5b0b1bec7de94de2976a7e0904e9afc787fec5576a37cc1083240729428 not found: ID does not exist" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.670333 4909 scope.go:117] "RemoveContainer" containerID="ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995" Nov 26 07:13:45 crc kubenswrapper[4909]: E1126 07:13:45.670537 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995\": container with ID starting with ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995 not found: ID does not exist" containerID="ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995" Nov 26 07:13:45 crc kubenswrapper[4909]: I1126 07:13:45.670556 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995"} err="failed to get container status \"ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995\": rpc error: code = NotFound desc = could not find container \"ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995\": container with ID starting with ddbc878bb409c33593c2e33f6580db3731483557b7f4cdcdbbe32692e6998995 not found: ID does not exist" Nov 26 07:13:46 crc kubenswrapper[4909]: I1126 07:13:46.517657 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7bc0290-b2ce-4e30-b4ce-11b59ab0644c" path="/var/lib/kubelet/pods/b7bc0290-b2ce-4e30-b4ce-11b59ab0644c/volumes" Nov 26 07:13:46 crc kubenswrapper[4909]: I1126 07:13:46.528982 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpv4w" event={"ID":"1c98a2e9-110c-44a2-8d31-39e894c7c759","Type":"ContainerStarted","Data":"f2e61c11bddda0ef541137f5fe931bbc0ee6ab577a10d29c57eafe636151c12c"} Nov 26 07:13:46 crc kubenswrapper[4909]: I1126 07:13:46.529033 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpv4w" event={"ID":"1c98a2e9-110c-44a2-8d31-39e894c7c759","Type":"ContainerStarted","Data":"88c5c8680d31c488291aa08a7f9c80987d5b8de1cd6b0ed70646e01d3db8625d"} Nov 26 07:13:46 crc kubenswrapper[4909]: I1126 07:13:46.529807 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qpv4w" Nov 26 07:13:46 crc kubenswrapper[4909]: I1126 07:13:46.533457 4909 generic.go:334] "Generic (PLEG): container finished" podID="8e650c64-3c73-4c67-970c-32fbc391b24d" containerID="ebe323b48859f22e586b431dc94bc99824c26e3b35ede67b1ec28706f9cde73a" exitCode=0 Nov 26 07:13:46 crc kubenswrapper[4909]: I1126 07:13:46.533628 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jfwm" event={"ID":"8e650c64-3c73-4c67-970c-32fbc391b24d","Type":"ContainerDied","Data":"ebe323b48859f22e586b431dc94bc99824c26e3b35ede67b1ec28706f9cde73a"} Nov 26 07:13:46 crc 
kubenswrapper[4909]: I1126 07:13:46.533685 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jfwm" event={"ID":"8e650c64-3c73-4c67-970c-32fbc391b24d","Type":"ContainerStarted","Data":"cecb5475f4600cc8b8bd1abeeb68d9ae794a3e6563e02d29154c5e97345d0a0c"} Nov 26 07:13:46 crc kubenswrapper[4909]: I1126 07:13:46.551847 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qpv4w" podStartSLOduration=3.551827073 podStartE2EDuration="3.551827073s" podCreationTimestamp="2025-11-26 07:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:13:46.545514522 +0000 UTC m=+798.691725698" watchObservedRunningTime="2025-11-26 07:13:46.551827073 +0000 UTC m=+798.698038229" Nov 26 07:13:47 crc kubenswrapper[4909]: I1126 07:13:47.554126 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jfwm" event={"ID":"8e650c64-3c73-4c67-970c-32fbc391b24d","Type":"ContainerStarted","Data":"1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30"} Nov 26 07:13:48 crc kubenswrapper[4909]: I1126 07:13:48.562179 4909 generic.go:334] "Generic (PLEG): container finished" podID="8e650c64-3c73-4c67-970c-32fbc391b24d" containerID="1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30" exitCode=0 Nov 26 07:13:48 crc kubenswrapper[4909]: I1126 07:13:48.563231 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jfwm" event={"ID":"8e650c64-3c73-4c67-970c-32fbc391b24d","Type":"ContainerDied","Data":"1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30"} Nov 26 07:13:51 crc kubenswrapper[4909]: I1126 07:13:51.579355 4909 generic.go:334] "Generic (PLEG): container finished" podID="33795123-6b00-438c-8dc7-b298f7c66924" containerID="39c6959f39875774ed32fab4bc9af553488df0294956703e7675ce34683a6e4d" exitCode=0 Nov 26 07:13:51 crc kubenswrapper[4909]: I1126 07:13:51.579586 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wf87k" event={"ID":"33795123-6b00-438c-8dc7-b298f7c66924","Type":"ContainerDied","Data":"39c6959f39875774ed32fab4bc9af553488df0294956703e7675ce34683a6e4d"} Nov 26 07:13:51 crc kubenswrapper[4909]: I1126 07:13:51.581437 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" event={"ID":"b1858595-566b-40f9-bf2b-bb6e1bd5990a","Type":"ContainerStarted","Data":"271b8eff2d84784e706fb5f5692bbbfa6eda6589219779a963a3021088c4c71f"} Nov 26 07:13:51 crc kubenswrapper[4909]: I1126 07:13:51.581746 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" Nov 26 07:13:51 crc kubenswrapper[4909]: I1126 07:13:51.583998 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jfwm" event={"ID":"8e650c64-3c73-4c67-970c-32fbc391b24d","Type":"ContainerStarted","Data":"dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4"} Nov 26 07:13:51 crc kubenswrapper[4909]: I1126 07:13:51.630736 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5jfwm" podStartSLOduration=2.9102195760000003 podStartE2EDuration="7.630714553s" podCreationTimestamp="2025-11-26 07:13:44 +0000 UTC" firstStartedPulling="2025-11-26 07:13:46.535485391 +0000 UTC 
m=+798.681696557" lastFinishedPulling="2025-11-26 07:13:51.255980368 +0000 UTC m=+803.402191534" observedRunningTime="2025-11-26 07:13:51.630385694 +0000 UTC m=+803.776596870" watchObservedRunningTime="2025-11-26 07:13:51.630714553 +0000 UTC m=+803.776925719" Nov 26 07:13:52 crc kubenswrapper[4909]: I1126 07:13:52.591543 4909 generic.go:334] "Generic (PLEG): container finished" podID="33795123-6b00-438c-8dc7-b298f7c66924" containerID="0feb2b36f48b4b82a5885b0414db70ce338b555b995c0a0641ee36b390985bce" exitCode=0 Nov 26 07:13:52 crc kubenswrapper[4909]: I1126 07:13:52.592531 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wf87k" event={"ID":"33795123-6b00-438c-8dc7-b298f7c66924","Type":"ContainerDied","Data":"0feb2b36f48b4b82a5885b0414db70ce338b555b995c0a0641ee36b390985bce"} Nov 26 07:13:52 crc kubenswrapper[4909]: I1126 07:13:52.616123 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" podStartSLOduration=2.394301048 podStartE2EDuration="9.616104933s" podCreationTimestamp="2025-11-26 07:13:43 +0000 UTC" firstStartedPulling="2025-11-26 07:13:44.034432819 +0000 UTC m=+796.180643985" lastFinishedPulling="2025-11-26 07:13:51.256236664 +0000 UTC m=+803.402447870" observedRunningTime="2025-11-26 07:13:51.652392749 +0000 UTC m=+803.798603905" watchObservedRunningTime="2025-11-26 07:13:52.616104933 +0000 UTC m=+804.762316099" Nov 26 07:13:53 crc kubenswrapper[4909]: I1126 07:13:53.603100 4909 generic.go:334] "Generic (PLEG): container finished" podID="33795123-6b00-438c-8dc7-b298f7c66924" containerID="3cdfb8e159d6fe222d118143c781ef671202a8cf0d669d35bdceac6ed8d3b34e" exitCode=0 Nov 26 07:13:53 crc kubenswrapper[4909]: I1126 07:13:53.603155 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wf87k" event={"ID":"33795123-6b00-438c-8dc7-b298f7c66924","Type":"ContainerDied","Data":"3cdfb8e159d6fe222d118143c781ef671202a8cf0d669d35bdceac6ed8d3b34e"} Nov 26 07:13:54 crc kubenswrapper[4909]: I1126 07:13:54.612157 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wf87k" event={"ID":"33795123-6b00-438c-8dc7-b298f7c66924","Type":"ContainerStarted","Data":"3826ca93a3e363ec92c3f7813fee1aba23c0041a768583dca86a914f42e103da"} Nov 26 07:13:54 crc kubenswrapper[4909]: I1126 07:13:54.612454 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:54 crc kubenswrapper[4909]: I1126 07:13:54.612464 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wf87k" event={"ID":"33795123-6b00-438c-8dc7-b298f7c66924","Type":"ContainerStarted","Data":"f8340c0df2680fbdd1f409454f859c5967acd3818f9c1caff63371c962e03cd7"} Nov 26 07:13:54 crc kubenswrapper[4909]: I1126 07:13:54.612473 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wf87k" event={"ID":"33795123-6b00-438c-8dc7-b298f7c66924","Type":"ContainerStarted","Data":"370016b1b79a0f4154d866af0def1294b79872a3527599ee339a1dc2f623e043"} Nov 26 07:13:54 crc kubenswrapper[4909]: I1126 07:13:54.612481 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wf87k" event={"ID":"33795123-6b00-438c-8dc7-b298f7c66924","Type":"ContainerStarted","Data":"3019d35718568b2c3a813e42fd3db0fd519e353b3ed314d9ec2996c66e72439f"} Nov 26 07:13:54 crc kubenswrapper[4909]: I1126 07:13:54.612488 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wf87k" 
event={"ID":"33795123-6b00-438c-8dc7-b298f7c66924","Type":"ContainerStarted","Data":"4c7f7c8f8e7b14916ac3851a2bb1c805e87bbe2bb9d89bb69c4d8f955c4e1579"} Nov 26 07:13:54 crc kubenswrapper[4909]: I1126 07:13:54.612496 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wf87k" event={"ID":"33795123-6b00-438c-8dc7-b298f7c66924","Type":"ContainerStarted","Data":"7031fedbe45fb5a5745789ea944b75b30d200f3db3e66ae0d30ece1c5e99ab2e"} Nov 26 07:13:54 crc kubenswrapper[4909]: I1126 07:13:54.634817 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-wf87k" podStartSLOduration=4.700087108 podStartE2EDuration="11.634796648s" podCreationTimestamp="2025-11-26 07:13:43 +0000 UTC" firstStartedPulling="2025-11-26 07:13:44.349665975 +0000 UTC m=+796.495877141" lastFinishedPulling="2025-11-26 07:13:51.284375515 +0000 UTC m=+803.430586681" observedRunningTime="2025-11-26 07:13:54.630667247 +0000 UTC m=+806.776878433" watchObservedRunningTime="2025-11-26 07:13:54.634796648 +0000 UTC m=+806.781007814" Nov 26 07:13:55 crc kubenswrapper[4909]: I1126 07:13:55.163778 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qpv4w" Nov 26 07:13:55 crc kubenswrapper[4909]: I1126 07:13:55.185778 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:55 crc kubenswrapper[4909]: I1126 07:13:55.185832 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:55 crc kubenswrapper[4909]: I1126 07:13:55.249620 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:13:55 crc kubenswrapper[4909]: I1126 07:13:55.996105 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rbpcf"] Nov 26 07:13:55 crc kubenswrapper[4909]: I1126 07:13:55.997439 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.005085 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rbpcf"] Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.008765 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-catalog-content\") pod \"community-operators-rbpcf\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.008850 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-utilities\") pod \"community-operators-rbpcf\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.008882 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f59jm\" (UniqueName: \"kubernetes.io/projected/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-kube-api-access-f59jm\") pod \"community-operators-rbpcf\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.109867 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-catalog-content\") pod \"community-operators-rbpcf\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.109954 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-utilities\") pod \"community-operators-rbpcf\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.109978 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f59jm\" (UniqueName: \"kubernetes.io/projected/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-kube-api-access-f59jm\") pod \"community-operators-rbpcf\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.110829 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-catalog-content\") pod \"community-operators-rbpcf\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.111072 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-utilities\") pod \"community-operators-rbpcf\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.126972 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f59jm\" (UniqueName: \"kubernetes.io/projected/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-kube-api-access-f59jm\") pod \"community-operators-rbpcf\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.315832 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.593736 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rbpcf"] Nov 26 07:13:56 crc kubenswrapper[4909]: W1126 07:13:56.601411 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0e25f62_4de5_4f21_ab7e_eeca1aa2414c.slice/crio-72a802aa4f2d3da601e8d9ae0db8d2509a1fbcd65747232244775f3f29ad2e7e WatchSource:0}: Error finding container 72a802aa4f2d3da601e8d9ae0db8d2509a1fbcd65747232244775f3f29ad2e7e: Status 404 returned error can't find the container with id 72a802aa4f2d3da601e8d9ae0db8d2509a1fbcd65747232244775f3f29ad2e7e Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.629485 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbpcf" event={"ID":"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c","Type":"ContainerStarted","Data":"72a802aa4f2d3da601e8d9ae0db8d2509a1fbcd65747232244775f3f29ad2e7e"} Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.767153 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk"] Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.768362 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.770560 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.816037 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk"] Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.821148 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6djlh\" (UniqueName: \"kubernetes.io/projected/aad78080-9712-4159-9318-7b3eefb0cb7b-kube-api-access-6djlh\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.821210 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.821243 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.922773 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6djlh\" (UniqueName: \"kubernetes.io/projected/aad78080-9712-4159-9318-7b3eefb0cb7b-kube-api-access-6djlh\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.922818 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.922838 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.923274 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.923414 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:13:56 crc kubenswrapper[4909]: I1126 07:13:56.943919 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6djlh\" (UniqueName: \"kubernetes.io/projected/aad78080-9712-4159-9318-7b3eefb0cb7b-kube-api-access-6djlh\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:13:57 crc kubenswrapper[4909]: I1126 07:13:57.084437 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:13:57 crc kubenswrapper[4909]: I1126 07:13:57.298493 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk"] Nov 26 07:13:57 crc kubenswrapper[4909]: W1126 07:13:57.306947 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaad78080_9712_4159_9318_7b3eefb0cb7b.slice/crio-925e6dea9d05265319fe86c75f72272d276f34659421ccfdd99afb03b4024d32 WatchSource:0}: Error finding container 925e6dea9d05265319fe86c75f72272d276f34659421ccfdd99afb03b4024d32: Status 404 returned error can't find the container with id 925e6dea9d05265319fe86c75f72272d276f34659421ccfdd99afb03b4024d32 Nov 26 07:13:57 crc kubenswrapper[4909]: I1126 07:13:57.639444 4909 generic.go:334] "Generic (PLEG): container finished" podID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" containerID="ff875048337a900471233f315835fd75ec0625627890b0e322b65671729bfb05" exitCode=0 Nov 26 07:13:57 crc kubenswrapper[4909]: I1126 07:13:57.639548 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbpcf" event={"ID":"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c","Type":"ContainerDied","Data":"ff875048337a900471233f315835fd75ec0625627890b0e322b65671729bfb05"} Nov 26 07:13:57 crc kubenswrapper[4909]: I1126 07:13:57.641284 4909 generic.go:334] "Generic (PLEG): container finished" podID="aad78080-9712-4159-9318-7b3eefb0cb7b" containerID="a67abaa1b5726863095f87b72d7d402279e7162a29c8c39abfa289bbb2ccd42d" exitCode=0 Nov 26 07:13:57 crc kubenswrapper[4909]: I1126 07:13:57.641334 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" event={"ID":"aad78080-9712-4159-9318-7b3eefb0cb7b","Type":"ContainerDied","Data":"a67abaa1b5726863095f87b72d7d402279e7162a29c8c39abfa289bbb2ccd42d"} Nov 26 07:13:57 crc kubenswrapper[4909]: I1126 07:13:57.641366 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" event={"ID":"aad78080-9712-4159-9318-7b3eefb0cb7b","Type":"ContainerStarted","Data":"925e6dea9d05265319fe86c75f72272d276f34659421ccfdd99afb03b4024d32"} Nov 26 07:13:59 crc kubenswrapper[4909]: I1126 07:13:59.194645 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-wf87k" Nov 26 07:13:59 crc kubenswrapper[4909]: I1126 07:13:59.282756 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-wf87k" Nov 26 07:14:00 crc kubenswrapper[4909]: I1126 07:14:00.659656 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbpcf" event={"ID":"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c","Type":"ContainerStarted","Data":"31bdf65ec82156a51e731101e7baf9a9abf4f69801837d9cd281e2f41985d725"} Nov 26 07:14:01 crc kubenswrapper[4909]: I1126 07:14:01.669917 4909 generic.go:334] "Generic (PLEG): container finished" podID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" containerID="31bdf65ec82156a51e731101e7baf9a9abf4f69801837d9cd281e2f41985d725" exitCode=0 Nov 26 07:14:01 crc kubenswrapper[4909]: I1126 07:14:01.669961 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbpcf" event={"ID":"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c","Type":"ContainerDied","Data":"31bdf65ec82156a51e731101e7baf9a9abf4f69801837d9cd281e2f41985d725"} Nov 26 07:14:03 crc kubenswrapper[4909]: I1126 07:14:03.564808 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-xw4zq" Nov 26 07:14:03 crc kubenswrapper[4909]: I1126 07:14:03.661674 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-9glvr" Nov 26 07:14:03 crc kubenswrapper[4909]: I1126 07:14:03.690081 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbpcf" event={"ID":"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c","Type":"ContainerStarted","Data":"3837bf948b431e1ab306164eabd111514d296970459850a6ca4aa83cc9466f1d"} Nov 26 07:14:03 crc kubenswrapper[4909]: I1126 07:14:03.691925 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" event={"ID":"aad78080-9712-4159-9318-7b3eefb0cb7b","Type":"ContainerStarted","Data":"0c1f24212a5a329ff88381ef05bcb7b45e74126f5cb26c0c795f1d88a928a9e6"} Nov 26 07:14:03 crc kubenswrapper[4909]: I1126 07:14:03.709695 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rbpcf" podStartSLOduration=2.845129373 podStartE2EDuration="8.709675971s" podCreationTimestamp="2025-11-26 07:13:55 +0000 UTC" firstStartedPulling="2025-11-26 07:13:57.641156776 +0000 UTC m=+809.787367942" lastFinishedPulling="2025-11-26 07:14:03.505703374 +0000 UTC m=+815.651914540" observedRunningTime="2025-11-26 07:14:03.706072063 +0000 UTC m=+815.852283239" watchObservedRunningTime="2025-11-26 07:14:03.709675971 +0000 UTC m=+815.855887147" Nov 26 07:14:04 crc kubenswrapper[4909]: I1126 07:14:04.196389 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-wf87k" Nov 26 07:14:04 crc kubenswrapper[4909]: I1126 07:14:04.698306 4909 generic.go:334] "Generic (PLEG): container finished" podID="aad78080-9712-4159-9318-7b3eefb0cb7b" 
containerID="0c1f24212a5a329ff88381ef05bcb7b45e74126f5cb26c0c795f1d88a928a9e6" exitCode=0 Nov 26 07:14:04 crc kubenswrapper[4909]: I1126 07:14:04.698616 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" event={"ID":"aad78080-9712-4159-9318-7b3eefb0cb7b","Type":"ContainerDied","Data":"0c1f24212a5a329ff88381ef05bcb7b45e74126f5cb26c0c795f1d88a928a9e6"} Nov 26 07:14:05 crc kubenswrapper[4909]: I1126 07:14:05.229372 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:14:05 crc kubenswrapper[4909]: I1126 07:14:05.707033 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" event={"ID":"aad78080-9712-4159-9318-7b3eefb0cb7b","Type":"ContainerStarted","Data":"d3b360e627730b9f64694e6cb5836123e940414c17c439c92adb27d704f612f1"} Nov 26 07:14:05 crc kubenswrapper[4909]: I1126 07:14:05.724158 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" podStartSLOduration=3.860609412 podStartE2EDuration="9.724138623s" podCreationTimestamp="2025-11-26 07:13:56 +0000 UTC" firstStartedPulling="2025-11-26 07:13:57.643464938 +0000 UTC m=+809.789676104" lastFinishedPulling="2025-11-26 07:14:03.506994149 +0000 UTC m=+815.653205315" observedRunningTime="2025-11-26 07:14:05.721873451 +0000 UTC m=+817.868084617" watchObservedRunningTime="2025-11-26 07:14:05.724138623 +0000 UTC m=+817.870349789" Nov 26 07:14:06 crc kubenswrapper[4909]: I1126 07:14:06.316277 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:14:06 crc kubenswrapper[4909]: I1126 07:14:06.316404 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:14:06 crc kubenswrapper[4909]: I1126 07:14:06.367418 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:14:06 crc kubenswrapper[4909]: I1126 07:14:06.718630 4909 generic.go:334] "Generic (PLEG): container finished" podID="aad78080-9712-4159-9318-7b3eefb0cb7b" containerID="d3b360e627730b9f64694e6cb5836123e940414c17c439c92adb27d704f612f1" exitCode=0 Nov 26 07:14:06 crc kubenswrapper[4909]: I1126 07:14:06.718691 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" event={"ID":"aad78080-9712-4159-9318-7b3eefb0cb7b","Type":"ContainerDied","Data":"d3b360e627730b9f64694e6cb5836123e940414c17c439c92adb27d704f612f1"} Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.070067 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.219264 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-bundle\") pod \"aad78080-9712-4159-9318-7b3eefb0cb7b\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.219318 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6djlh\" (UniqueName: \"kubernetes.io/projected/aad78080-9712-4159-9318-7b3eefb0cb7b-kube-api-access-6djlh\") pod \"aad78080-9712-4159-9318-7b3eefb0cb7b\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.219418 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-util\") pod \"aad78080-9712-4159-9318-7b3eefb0cb7b\" (UID: \"aad78080-9712-4159-9318-7b3eefb0cb7b\") " Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.220346 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-bundle" (OuterVolumeSpecName: "bundle") pod "aad78080-9712-4159-9318-7b3eefb0cb7b" (UID: "aad78080-9712-4159-9318-7b3eefb0cb7b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.224479 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad78080-9712-4159-9318-7b3eefb0cb7b-kube-api-access-6djlh" (OuterVolumeSpecName: "kube-api-access-6djlh") pod "aad78080-9712-4159-9318-7b3eefb0cb7b" (UID: "aad78080-9712-4159-9318-7b3eefb0cb7b"). InnerVolumeSpecName "kube-api-access-6djlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.237139 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-util" (OuterVolumeSpecName: "util") pod "aad78080-9712-4159-9318-7b3eefb0cb7b" (UID: "aad78080-9712-4159-9318-7b3eefb0cb7b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.321308 4909 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.321348 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6djlh\" (UniqueName: \"kubernetes.io/projected/aad78080-9712-4159-9318-7b3eefb0cb7b-kube-api-access-6djlh\") on node \"crc\" DevicePath \"\"" Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.321370 4909 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aad78080-9712-4159-9318-7b3eefb0cb7b-util\") on node \"crc\" DevicePath \"\"" Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.679432 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jfwm"] Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.680039 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5jfwm" podUID="8e650c64-3c73-4c67-970c-32fbc391b24d" containerName="registry-server" containerID="cri-o://dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4" gracePeriod=2 Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.736093 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" event={"ID":"aad78080-9712-4159-9318-7b3eefb0cb7b","Type":"ContainerDied","Data":"925e6dea9d05265319fe86c75f72272d276f34659421ccfdd99afb03b4024d32"} Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.736548 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="925e6dea9d05265319fe86c75f72272d276f34659421ccfdd99afb03b4024d32" Nov 26 07:14:08 crc kubenswrapper[4909]: I1126 07:14:08.736180 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.091189 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.232749 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4tld\" (UniqueName: \"kubernetes.io/projected/8e650c64-3c73-4c67-970c-32fbc391b24d-kube-api-access-k4tld\") pod \"8e650c64-3c73-4c67-970c-32fbc391b24d\" (UID: \"8e650c64-3c73-4c67-970c-32fbc391b24d\") " Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.232864 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-utilities\") pod \"8e650c64-3c73-4c67-970c-32fbc391b24d\" (UID: \"8e650c64-3c73-4c67-970c-32fbc391b24d\") " Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.232925 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-catalog-content\") pod \"8e650c64-3c73-4c67-970c-32fbc391b24d\" (UID: \"8e650c64-3c73-4c67-970c-32fbc391b24d\") " Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.233676 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-utilities" (OuterVolumeSpecName: "utilities") pod "8e650c64-3c73-4c67-970c-32fbc391b24d" (UID: "8e650c64-3c73-4c67-970c-32fbc391b24d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.236775 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e650c64-3c73-4c67-970c-32fbc391b24d-kube-api-access-k4tld" (OuterVolumeSpecName: "kube-api-access-k4tld") pod "8e650c64-3c73-4c67-970c-32fbc391b24d" (UID: "8e650c64-3c73-4c67-970c-32fbc391b24d"). InnerVolumeSpecName "kube-api-access-k4tld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.279360 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e650c64-3c73-4c67-970c-32fbc391b24d" (UID: "8e650c64-3c73-4c67-970c-32fbc391b24d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.334889 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4tld\" (UniqueName: \"kubernetes.io/projected/8e650c64-3c73-4c67-970c-32fbc391b24d-kube-api-access-k4tld\") on node \"crc\" DevicePath \"\"" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.334925 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.334934 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e650c64-3c73-4c67-970c-32fbc391b24d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.745367 4909 generic.go:334] "Generic (PLEG): container finished" podID="8e650c64-3c73-4c67-970c-32fbc391b24d" containerID="dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4" exitCode=0 Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.745409 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jfwm" event={"ID":"8e650c64-3c73-4c67-970c-32fbc391b24d","Type":"ContainerDied","Data":"dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4"} Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.745435 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jfwm" event={"ID":"8e650c64-3c73-4c67-970c-32fbc391b24d","Type":"ContainerDied","Data":"cecb5475f4600cc8b8bd1abeeb68d9ae794a3e6563e02d29154c5e97345d0a0c"} Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.745451 4909 scope.go:117] "RemoveContainer" containerID="dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.745496 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5jfwm" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.764447 4909 scope.go:117] "RemoveContainer" containerID="1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.777700 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jfwm"] Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.781973 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5jfwm"] Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.807762 4909 scope.go:117] "RemoveContainer" containerID="ebe323b48859f22e586b431dc94bc99824c26e3b35ede67b1ec28706f9cde73a" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.822195 4909 scope.go:117] "RemoveContainer" containerID="dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4" Nov 26 07:14:09 crc kubenswrapper[4909]: E1126 07:14:09.822678 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4\": container with ID starting with dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4 not found: ID does not exist" containerID="dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.822732 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4"} err="failed to get container status \"dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4\": rpc error: code = NotFound desc = could not find container \"dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4\": container with ID starting with dbd579fa3fee82b6b54f01e69ced66011cb35e5d5e8893101af5a86f783fffb4 not found: ID does not exist" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.822764 4909 scope.go:117] "RemoveContainer" containerID="1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30" Nov 26 07:14:09 crc kubenswrapper[4909]: E1126 07:14:09.823130 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30\": container with ID starting with 1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30 not found: ID does not exist" containerID="1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.823186 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30"} err="failed to get container status \"1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30\": rpc error: code = NotFound desc = could not find container \"1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30\": container with ID starting with 1e57539034cdde76db64be82af67a91d2db68c35c36219bb0eb1a51cccf68b30 not found: ID does not exist" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.823208 4909 scope.go:117] "RemoveContainer" containerID="ebe323b48859f22e586b431dc94bc99824c26e3b35ede67b1ec28706f9cde73a" Nov 26 07:14:09 crc kubenswrapper[4909]: E1126 07:14:09.823582 4909 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ebe323b48859f22e586b431dc94bc99824c26e3b35ede67b1ec28706f9cde73a\": container with ID starting with ebe323b48859f22e586b431dc94bc99824c26e3b35ede67b1ec28706f9cde73a not found: ID does not exist" containerID="ebe323b48859f22e586b431dc94bc99824c26e3b35ede67b1ec28706f9cde73a" Nov 26 07:14:09 crc kubenswrapper[4909]: I1126 07:14:09.823631 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe323b48859f22e586b431dc94bc99824c26e3b35ede67b1ec28706f9cde73a"} err="failed to get container status \"ebe323b48859f22e586b431dc94bc99824c26e3b35ede67b1ec28706f9cde73a\": rpc error: code = NotFound desc = could not find container \"ebe323b48859f22e586b431dc94bc99824c26e3b35ede67b1ec28706f9cde73a\": container with ID starting with ebe323b48859f22e586b431dc94bc99824c26e3b35ede67b1ec28706f9cde73a not found: ID does not exist" Nov 26 07:14:10 crc kubenswrapper[4909]: I1126 07:14:10.508912 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e650c64-3c73-4c67-970c-32fbc391b24d" path="/var/lib/kubelet/pods/8e650c64-3c73-4c67-970c-32fbc391b24d/volumes" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.929799 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t"] Nov 26 07:14:13 crc kubenswrapper[4909]: E1126 07:14:13.930019 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e650c64-3c73-4c67-970c-32fbc391b24d" containerName="registry-server" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.930031 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e650c64-3c73-4c67-970c-32fbc391b24d" containerName="registry-server" Nov 26 07:14:13 crc kubenswrapper[4909]: E1126 07:14:13.930041 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e650c64-3c73-4c67-970c-32fbc391b24d" containerName="extract-utilities" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.930047 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e650c64-3c73-4c67-970c-32fbc391b24d" containerName="extract-utilities" Nov 26 07:14:13 crc kubenswrapper[4909]: E1126 07:14:13.930056 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e650c64-3c73-4c67-970c-32fbc391b24d" containerName="extract-content" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.930062 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e650c64-3c73-4c67-970c-32fbc391b24d" containerName="extract-content" Nov 26 07:14:13 crc kubenswrapper[4909]: E1126 07:14:13.930071 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad78080-9712-4159-9318-7b3eefb0cb7b" containerName="pull" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.930076 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad78080-9712-4159-9318-7b3eefb0cb7b" containerName="pull" Nov 26 07:14:13 crc kubenswrapper[4909]: E1126 07:14:13.930084 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad78080-9712-4159-9318-7b3eefb0cb7b" containerName="extract" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.930089 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad78080-9712-4159-9318-7b3eefb0cb7b" containerName="extract" Nov 26 07:14:13 crc kubenswrapper[4909]: E1126 07:14:13.930102 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad78080-9712-4159-9318-7b3eefb0cb7b" containerName="util" Nov 26 07:14:13 
crc kubenswrapper[4909]: I1126 07:14:13.930107 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad78080-9712-4159-9318-7b3eefb0cb7b" containerName="util" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.930209 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e650c64-3c73-4c67-970c-32fbc391b24d" containerName="registry-server" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.930226 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad78080-9712-4159-9318-7b3eefb0cb7b" containerName="extract" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.930641 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.932269 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.933114 4909 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-zqpgt" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.938425 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.944056 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t"] Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.994817 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a55df91b-d058-466e-8004-e966917cb0bb-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cv22t\" (UID: \"a55df91b-d058-466e-8004-e966917cb0bb\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t" Nov 26 07:14:13 crc kubenswrapper[4909]: I1126 07:14:13.994922 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xnkt\" (UniqueName: \"kubernetes.io/projected/a55df91b-d058-466e-8004-e966917cb0bb-kube-api-access-6xnkt\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cv22t\" (UID: \"a55df91b-d058-466e-8004-e966917cb0bb\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t" Nov 26 07:14:14 crc kubenswrapper[4909]: I1126 07:14:14.096513 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xnkt\" (UniqueName: \"kubernetes.io/projected/a55df91b-d058-466e-8004-e966917cb0bb-kube-api-access-6xnkt\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cv22t\" (UID: \"a55df91b-d058-466e-8004-e966917cb0bb\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t" Nov 26 07:14:14 crc kubenswrapper[4909]: I1126 07:14:14.097013 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a55df91b-d058-466e-8004-e966917cb0bb-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cv22t\" (UID: \"a55df91b-d058-466e-8004-e966917cb0bb\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t" Nov 26 07:14:14 crc kubenswrapper[4909]: I1126 07:14:14.097459 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a55df91b-d058-466e-8004-e966917cb0bb-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cv22t\" (UID: \"a55df91b-d058-466e-8004-e966917cb0bb\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t" Nov 26 07:14:14 crc kubenswrapper[4909]: I1126 07:14:14.118451 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xnkt\" (UniqueName: \"kubernetes.io/projected/a55df91b-d058-466e-8004-e966917cb0bb-kube-api-access-6xnkt\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cv22t\" (UID: \"a55df91b-d058-466e-8004-e966917cb0bb\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t" Nov 26 07:14:14 crc kubenswrapper[4909]: I1126 07:14:14.247565 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t" Nov 26 07:14:14 crc kubenswrapper[4909]: I1126 07:14:14.725496 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t"] Nov 26 07:14:14 crc kubenswrapper[4909]: W1126 07:14:14.730752 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55df91b_d058_466e_8004_e966917cb0bb.slice/crio-96a3ea1c96d42ef8098c7327d521b72d8bbf73d5a295e4dca4d4648a9cc14e96 WatchSource:0}: Error finding container 96a3ea1c96d42ef8098c7327d521b72d8bbf73d5a295e4dca4d4648a9cc14e96: Status 404 returned error can't find the container with id 96a3ea1c96d42ef8098c7327d521b72d8bbf73d5a295e4dca4d4648a9cc14e96 Nov 26 07:14:14 crc kubenswrapper[4909]: I1126 07:14:14.775690 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t" event={"ID":"a55df91b-d058-466e-8004-e966917cb0bb","Type":"ContainerStarted","Data":"96a3ea1c96d42ef8098c7327d521b72d8bbf73d5a295e4dca4d4648a9cc14e96"} Nov 26 07:14:16 crc kubenswrapper[4909]: I1126 07:14:16.369157 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:14:18 crc kubenswrapper[4909]: I1126 07:14:18.880947 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rbpcf"] Nov 26 07:14:18 crc kubenswrapper[4909]: I1126 07:14:18.881693 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rbpcf" podUID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" containerName="registry-server" containerID="cri-o://3837bf948b431e1ab306164eabd111514d296970459850a6ca4aa83cc9466f1d" gracePeriod=2 Nov 26 07:14:19 crc kubenswrapper[4909]: I1126 07:14:19.808151 4909 generic.go:334] "Generic (PLEG): container finished" podID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" containerID="3837bf948b431e1ab306164eabd111514d296970459850a6ca4aa83cc9466f1d" exitCode=0 Nov 26 07:14:19 crc kubenswrapper[4909]: I1126 07:14:19.808212 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbpcf" event={"ID":"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c","Type":"ContainerDied","Data":"3837bf948b431e1ab306164eabd111514d296970459850a6ca4aa83cc9466f1d"} Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.274440 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.393519 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-catalog-content\") pod \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.393885 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-utilities\") pod \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.393962 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f59jm\" (UniqueName: \"kubernetes.io/projected/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-kube-api-access-f59jm\") pod \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\" (UID: \"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c\") " Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.394665 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-utilities" (OuterVolumeSpecName: "utilities") pod "a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" (UID: "a0e25f62-4de5-4f21-ab7e-eeca1aa2414c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.399631 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-kube-api-access-f59jm" (OuterVolumeSpecName: "kube-api-access-f59jm") pod "a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" (UID: "a0e25f62-4de5-4f21-ab7e-eeca1aa2414c"). InnerVolumeSpecName "kube-api-access-f59jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.447773 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" (UID: "a0e25f62-4de5-4f21-ab7e-eeca1aa2414c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.495320 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f59jm\" (UniqueName: \"kubernetes.io/projected/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-kube-api-access-f59jm\") on node \"crc\" DevicePath \"\"" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.495353 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.495362 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.823387 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t" event={"ID":"a55df91b-d058-466e-8004-e966917cb0bb","Type":"ContainerStarted","Data":"d0f934b3d5a3a8321433a81557ce46cb8a3f68e50e94bca1e042a0036538f473"} Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.826513 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbpcf" event={"ID":"a0e25f62-4de5-4f21-ab7e-eeca1aa2414c","Type":"ContainerDied","Data":"72a802aa4f2d3da601e8d9ae0db8d2509a1fbcd65747232244775f3f29ad2e7e"} Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.826566 4909 scope.go:117] "RemoveContainer" containerID="3837bf948b431e1ab306164eabd111514d296970459850a6ca4aa83cc9466f1d" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.826697 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rbpcf" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.846903 4909 scope.go:117] "RemoveContainer" containerID="31bdf65ec82156a51e731101e7baf9a9abf4f69801837d9cd281e2f41985d725" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.887841 4909 scope.go:117] "RemoveContainer" containerID="ff875048337a900471233f315835fd75ec0625627890b0e322b65671729bfb05" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.888681 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cv22t" podStartSLOduration=2.312239035 podStartE2EDuration="8.888656406s" podCreationTimestamp="2025-11-26 07:14:13 +0000 UTC" firstStartedPulling="2025-11-26 07:14:14.733822721 +0000 UTC m=+826.880033887" lastFinishedPulling="2025-11-26 07:14:21.310240092 +0000 UTC m=+833.456451258" observedRunningTime="2025-11-26 07:14:21.850608886 +0000 UTC m=+833.996820052" watchObservedRunningTime="2025-11-26 07:14:21.888656406 +0000 UTC m=+834.034867572" Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.894491 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rbpcf"] Nov 26 07:14:21 crc kubenswrapper[4909]: I1126 07:14:21.898679 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rbpcf"] Nov 26 07:14:22 crc kubenswrapper[4909]: I1126 07:14:22.506827 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" path="/var/lib/kubelet/pods/a0e25f62-4de5-4f21-ab7e-eeca1aa2414c/volumes" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.591409 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bb6kf"] Nov 26 07:14:25 crc kubenswrapper[4909]: E1126 07:14:25.592002 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" containerName="registry-server" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.592020 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" containerName="registry-server" Nov 26 07:14:25 crc kubenswrapper[4909]: E1126 07:14:25.592037 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" containerName="extract-utilities" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.592046 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" containerName="extract-utilities" Nov 26 07:14:25 crc kubenswrapper[4909]: E1126 07:14:25.592063 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" containerName="extract-content" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.592070 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" containerName="extract-content" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.592191 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e25f62-4de5-4f21-ab7e-eeca1aa2414c" containerName="registry-server" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.592686 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.595168 4909 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2kv7z" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.595602 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.598267 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.603836 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bb6kf"] Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.651912 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg5kf\" (UniqueName: \"kubernetes.io/projected/0fb520c3-031d-4c32-af9e-b4cdb73e4851-kube-api-access-tg5kf\") pod \"cert-manager-webhook-f4fb5df64-bb6kf\" (UID: \"0fb520c3-031d-4c32-af9e-b4cdb73e4851\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.652007 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fb520c3-031d-4c32-af9e-b4cdb73e4851-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bb6kf\" (UID: \"0fb520c3-031d-4c32-af9e-b4cdb73e4851\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.752868 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg5kf\" (UniqueName: \"kubernetes.io/projected/0fb520c3-031d-4c32-af9e-b4cdb73e4851-kube-api-access-tg5kf\") pod \"cert-manager-webhook-f4fb5df64-bb6kf\" (UID: \"0fb520c3-031d-4c32-af9e-b4cdb73e4851\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.752949 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fb520c3-031d-4c32-af9e-b4cdb73e4851-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bb6kf\" (UID: \"0fb520c3-031d-4c32-af9e-b4cdb73e4851\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.777248 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fb520c3-031d-4c32-af9e-b4cdb73e4851-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bb6kf\" (UID: \"0fb520c3-031d-4c32-af9e-b4cdb73e4851\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.777296 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg5kf\" (UniqueName: \"kubernetes.io/projected/0fb520c3-031d-4c32-af9e-b4cdb73e4851-kube-api-access-tg5kf\") pod \"cert-manager-webhook-f4fb5df64-bb6kf\" (UID: \"0fb520c3-031d-4c32-af9e-b4cdb73e4851\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" Nov 26 07:14:25 crc kubenswrapper[4909]: I1126 07:14:25.914952 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" Nov 26 07:14:26 crc kubenswrapper[4909]: I1126 07:14:26.217139 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bb6kf"] Nov 26 07:14:26 crc kubenswrapper[4909]: I1126 07:14:26.856880 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" event={"ID":"0fb520c3-031d-4c32-af9e-b4cdb73e4851","Type":"ContainerStarted","Data":"01ded1c8667f5d15da6d946cedee83377f3f45646ae39cb2571fdfdfe03ab4e9"} Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.132718 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-86dsq"] Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.134634 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.136982 4909 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2m54m" Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.150529 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-86dsq"] Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.188788 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff1a0925-55ac-478f-a400-44391e090a1d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-86dsq\" (UID: \"ff1a0925-55ac-478f-a400-44391e090a1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.188864 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldfwn\" (UniqueName: \"kubernetes.io/projected/ff1a0925-55ac-478f-a400-44391e090a1d-kube-api-access-ldfwn\") pod \"cert-manager-cainjector-855d9ccff4-86dsq\" (UID: \"ff1a0925-55ac-478f-a400-44391e090a1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.290108 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff1a0925-55ac-478f-a400-44391e090a1d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-86dsq\" (UID: \"ff1a0925-55ac-478f-a400-44391e090a1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.290189 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldfwn\" (UniqueName: \"kubernetes.io/projected/ff1a0925-55ac-478f-a400-44391e090a1d-kube-api-access-ldfwn\") pod \"cert-manager-cainjector-855d9ccff4-86dsq\" (UID: \"ff1a0925-55ac-478f-a400-44391e090a1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.311586 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldfwn\" (UniqueName: \"kubernetes.io/projected/ff1a0925-55ac-478f-a400-44391e090a1d-kube-api-access-ldfwn\") pod \"cert-manager-cainjector-855d9ccff4-86dsq\" (UID: \"ff1a0925-55ac-478f-a400-44391e090a1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.311775 4909 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff1a0925-55ac-478f-a400-44391e090a1d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-86dsq\" (UID: \"ff1a0925-55ac-478f-a400-44391e090a1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.461550 4909 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2m54m" Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.471090 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.685857 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-86dsq"] Nov 26 07:14:28 crc kubenswrapper[4909]: W1126 07:14:28.696291 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1a0925_55ac_478f_a400_44391e090a1d.slice/crio-b066660cea0f18ae7554c0182d4c44fe0a0e82f581ea6971a6c199696f009969 WatchSource:0}: Error finding container b066660cea0f18ae7554c0182d4c44fe0a0e82f581ea6971a6c199696f009969: Status 404 returned error can't find the container with id b066660cea0f18ae7554c0182d4c44fe0a0e82f581ea6971a6c199696f009969 Nov 26 07:14:28 crc kubenswrapper[4909]: I1126 07:14:28.869777 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" event={"ID":"ff1a0925-55ac-478f-a400-44391e090a1d","Type":"ContainerStarted","Data":"b066660cea0f18ae7554c0182d4c44fe0a0e82f581ea6971a6c199696f009969"} Nov 26 07:14:33 crc kubenswrapper[4909]: I1126 07:14:33.897930 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" event={"ID":"0fb520c3-031d-4c32-af9e-b4cdb73e4851","Type":"ContainerStarted","Data":"672338cc0e6cbd83d9c476b52778414df662cddd4bf111666b48768e7e9b5379"} Nov 26 07:14:33 crc kubenswrapper[4909]: I1126 07:14:33.898445 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" Nov 26 07:14:33 crc kubenswrapper[4909]: I1126 07:14:33.899987 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" event={"ID":"ff1a0925-55ac-478f-a400-44391e090a1d","Type":"ContainerStarted","Data":"c013e6d5ee288db69f4caae45c7cbb840813134bdaa0d5fd1f785824de8e05b1"} Nov 26 07:14:33 crc kubenswrapper[4909]: I1126 07:14:33.920853 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" podStartSLOduration=1.566147739 podStartE2EDuration="8.920815728s" podCreationTimestamp="2025-11-26 07:14:25 +0000 UTC" firstStartedPulling="2025-11-26 07:14:26.229186976 +0000 UTC m=+838.375398142" lastFinishedPulling="2025-11-26 07:14:33.583854965 +0000 UTC m=+845.730066131" observedRunningTime="2025-11-26 07:14:33.914418225 +0000 UTC m=+846.060629401" watchObservedRunningTime="2025-11-26 07:14:33.920815728 +0000 UTC m=+846.067026894" Nov 26 07:14:40 crc kubenswrapper[4909]: I1126 07:14:40.918822 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-bb6kf" Nov 26 07:14:40 crc kubenswrapper[4909]: I1126 07:14:40.943570 4909 pod_startup_latency_tracker.go:104] 
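
The probe entries above show the ordering the kubelet enforces: the webhook's readiness is first logged with status="" (no result yet) and only flips to ready once probing has run, and earlier rbpcf's readiness also sat at "" while its startup probe was still unhealthy, since readiness probing is held back until startup succeeds. A toy state machine for those transitions (illustrative only, not kubelet's prober types):

    // Toy probe bookkeeping: readiness is "" until a result exists, and
    // readiness results are withheld until the startup probe has passed.
    package main

    import "fmt"

    type probeState struct {
        started bool // startup probe has succeeded
    }

    func (p *probeState) onStartup(healthy bool) {
        if healthy {
            p.started = true
            fmt.Println(`probe="startup" status="started"`)
            return
        }
        fmt.Println(`probe="startup" status="unhealthy"`)
    }

    func (p *probeState) onReadiness(healthy bool) {
        if !p.started {
            fmt.Println(`probe="readiness" status=""`) // no result yet
            return
        }
        if healthy {
            fmt.Println(`probe="readiness" status="ready"`)
        } else {
            fmt.Println(`probe="readiness" status="unhealthy"`)
        }
    }

    func main() {
        var p probeState
        p.onStartup(false)  // startup unhealthy
        p.onReadiness(true) // readiness still ""
        p.onStartup(true)   // startup started
        p.onReadiness(true) // readiness ready
    }
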
"Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" podStartSLOduration=8.034536414 podStartE2EDuration="12.943549989s" podCreationTimestamp="2025-11-26 07:14:28 +0000 UTC" firstStartedPulling="2025-11-26 07:14:28.699213048 +0000 UTC m=+840.845424214" lastFinishedPulling="2025-11-26 07:14:33.608226623 +0000 UTC m=+845.754437789" observedRunningTime="2025-11-26 07:14:33.933713677 +0000 UTC m=+846.079924833" watchObservedRunningTime="2025-11-26 07:14:40.943549989 +0000 UTC m=+853.089761155" Nov 26 07:14:43 crc kubenswrapper[4909]: I1126 07:14:43.726747 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4p4p2"] Nov 26 07:14:43 crc kubenswrapper[4909]: I1126 07:14:43.727828 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4p4p2" Nov 26 07:14:43 crc kubenswrapper[4909]: I1126 07:14:43.731216 4909 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tdd8v" Nov 26 07:14:43 crc kubenswrapper[4909]: I1126 07:14:43.753699 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4p4p2"] Nov 26 07:14:43 crc kubenswrapper[4909]: I1126 07:14:43.803540 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85pkp\" (UniqueName: \"kubernetes.io/projected/ce540878-55f9-495e-8cc1-30402bb55d9f-kube-api-access-85pkp\") pod \"cert-manager-86cb77c54b-4p4p2\" (UID: \"ce540878-55f9-495e-8cc1-30402bb55d9f\") " pod="cert-manager/cert-manager-86cb77c54b-4p4p2" Nov 26 07:14:43 crc kubenswrapper[4909]: I1126 07:14:43.803675 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce540878-55f9-495e-8cc1-30402bb55d9f-bound-sa-token\") pod \"cert-manager-86cb77c54b-4p4p2\" (UID: \"ce540878-55f9-495e-8cc1-30402bb55d9f\") " pod="cert-manager/cert-manager-86cb77c54b-4p4p2" Nov 26 07:14:43 crc kubenswrapper[4909]: I1126 07:14:43.905784 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce540878-55f9-495e-8cc1-30402bb55d9f-bound-sa-token\") pod \"cert-manager-86cb77c54b-4p4p2\" (UID: \"ce540878-55f9-495e-8cc1-30402bb55d9f\") " pod="cert-manager/cert-manager-86cb77c54b-4p4p2" Nov 26 07:14:43 crc kubenswrapper[4909]: I1126 07:14:43.905958 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85pkp\" (UniqueName: \"kubernetes.io/projected/ce540878-55f9-495e-8cc1-30402bb55d9f-kube-api-access-85pkp\") pod \"cert-manager-86cb77c54b-4p4p2\" (UID: \"ce540878-55f9-495e-8cc1-30402bb55d9f\") " pod="cert-manager/cert-manager-86cb77c54b-4p4p2" Nov 26 07:14:43 crc kubenswrapper[4909]: I1126 07:14:43.924116 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce540878-55f9-495e-8cc1-30402bb55d9f-bound-sa-token\") pod \"cert-manager-86cb77c54b-4p4p2\" (UID: \"ce540878-55f9-495e-8cc1-30402bb55d9f\") " pod="cert-manager/cert-manager-86cb77c54b-4p4p2" Nov 26 07:14:43 crc kubenswrapper[4909]: I1126 07:14:43.924468 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85pkp\" (UniqueName: \"kubernetes.io/projected/ce540878-55f9-495e-8cc1-30402bb55d9f-kube-api-access-85pkp\") pod 
\"cert-manager-86cb77c54b-4p4p2\" (UID: \"ce540878-55f9-495e-8cc1-30402bb55d9f\") " pod="cert-manager/cert-manager-86cb77c54b-4p4p2" Nov 26 07:14:44 crc kubenswrapper[4909]: I1126 07:14:44.064920 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4p4p2" Nov 26 07:14:44 crc kubenswrapper[4909]: I1126 07:14:44.288011 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4p4p2"] Nov 26 07:14:44 crc kubenswrapper[4909]: W1126 07:14:44.293568 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce540878_55f9_495e_8cc1_30402bb55d9f.slice/crio-07abbeeebf5b429ba08fd5fdddd7acfa1ac71be8d723b0216f5234049777ab76 WatchSource:0}: Error finding container 07abbeeebf5b429ba08fd5fdddd7acfa1ac71be8d723b0216f5234049777ab76: Status 404 returned error can't find the container with id 07abbeeebf5b429ba08fd5fdddd7acfa1ac71be8d723b0216f5234049777ab76 Nov 26 07:14:44 crc kubenswrapper[4909]: I1126 07:14:44.968066 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4p4p2" event={"ID":"ce540878-55f9-495e-8cc1-30402bb55d9f","Type":"ContainerStarted","Data":"8bc6453c4d18ccd3bfefbe19b0c7e26b6c5d86f34b772663446d2795f0cf076f"} Nov 26 07:14:44 crc kubenswrapper[4909]: I1126 07:14:44.968430 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4p4p2" event={"ID":"ce540878-55f9-495e-8cc1-30402bb55d9f","Type":"ContainerStarted","Data":"07abbeeebf5b429ba08fd5fdddd7acfa1ac71be8d723b0216f5234049777ab76"} Nov 26 07:14:44 crc kubenswrapper[4909]: I1126 07:14:44.983655 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-4p4p2" podStartSLOduration=1.983631693 podStartE2EDuration="1.983631693s" podCreationTimestamp="2025-11-26 07:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:14:44.98203935 +0000 UTC m=+857.128250516" watchObservedRunningTime="2025-11-26 07:14:44.983631693 +0000 UTC m=+857.129842859" Nov 26 07:14:54 crc kubenswrapper[4909]: I1126 07:14:54.140251 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-g4pcs"] Nov 26 07:14:54 crc kubenswrapper[4909]: I1126 07:14:54.141790 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g4pcs" Nov 26 07:14:54 crc kubenswrapper[4909]: I1126 07:14:54.144129 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 26 07:14:54 crc kubenswrapper[4909]: I1126 07:14:54.145581 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 26 07:14:54 crc kubenswrapper[4909]: I1126 07:14:54.152692 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wqp4b" Nov 26 07:14:54 crc kubenswrapper[4909]: I1126 07:14:54.162635 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g4pcs"] Nov 26 07:14:54 crc kubenswrapper[4909]: I1126 07:14:54.243272 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rd4b\" (UniqueName: \"kubernetes.io/projected/d18fda01-e7ed-49e9-83e9-a06dea3a3245-kube-api-access-8rd4b\") pod \"openstack-operator-index-g4pcs\" (UID: \"d18fda01-e7ed-49e9-83e9-a06dea3a3245\") " pod="openstack-operators/openstack-operator-index-g4pcs" Nov 26 07:14:54 crc kubenswrapper[4909]: I1126 07:14:54.344730 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rd4b\" (UniqueName: \"kubernetes.io/projected/d18fda01-e7ed-49e9-83e9-a06dea3a3245-kube-api-access-8rd4b\") pod \"openstack-operator-index-g4pcs\" (UID: \"d18fda01-e7ed-49e9-83e9-a06dea3a3245\") " pod="openstack-operators/openstack-operator-index-g4pcs" Nov 26 07:14:54 crc kubenswrapper[4909]: I1126 07:14:54.363394 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rd4b\" (UniqueName: \"kubernetes.io/projected/d18fda01-e7ed-49e9-83e9-a06dea3a3245-kube-api-access-8rd4b\") pod \"openstack-operator-index-g4pcs\" (UID: \"d18fda01-e7ed-49e9-83e9-a06dea3a3245\") " pod="openstack-operators/openstack-operator-index-g4pcs" Nov 26 07:14:54 crc kubenswrapper[4909]: I1126 07:14:54.460063 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g4pcs" Nov 26 07:14:54 crc kubenswrapper[4909]: I1126 07:14:54.855391 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g4pcs"] Nov 26 07:14:54 crc kubenswrapper[4909]: W1126 07:14:54.858079 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd18fda01_e7ed_49e9_83e9_a06dea3a3245.slice/crio-66300d5ad5b368a37b9744133de8d2114650518e09b781b1581d6fb6ba50c5cf WatchSource:0}: Error finding container 66300d5ad5b368a37b9744133de8d2114650518e09b781b1581d6fb6ba50c5cf: Status 404 returned error can't find the container with id 66300d5ad5b368a37b9744133de8d2114650518e09b781b1581d6fb6ba50c5cf Nov 26 07:14:55 crc kubenswrapper[4909]: I1126 07:14:55.131570 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g4pcs" event={"ID":"d18fda01-e7ed-49e9-83e9-a06dea3a3245","Type":"ContainerStarted","Data":"66300d5ad5b368a37b9744133de8d2114650518e09b781b1581d6fb6ba50c5cf"} Nov 26 07:14:57 crc kubenswrapper[4909]: I1126 07:14:57.147632 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g4pcs" event={"ID":"d18fda01-e7ed-49e9-83e9-a06dea3a3245","Type":"ContainerStarted","Data":"55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9"} Nov 26 07:14:57 crc kubenswrapper[4909]: I1126 07:14:57.168738 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-g4pcs" podStartSLOduration=1.063360341 podStartE2EDuration="3.168717662s" podCreationTimestamp="2025-11-26 07:14:54 +0000 UTC" firstStartedPulling="2025-11-26 07:14:54.859745464 +0000 UTC m=+867.005956640" lastFinishedPulling="2025-11-26 07:14:56.965102795 +0000 UTC m=+869.111313961" observedRunningTime="2025-11-26 07:14:57.15944374 +0000 UTC m=+869.305654916" watchObservedRunningTime="2025-11-26 07:14:57.168717662 +0000 UTC m=+869.314928848" Nov 26 07:14:57 crc kubenswrapper[4909]: I1126 07:14:57.487728 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-g4pcs"] Nov 26 07:14:58 crc kubenswrapper[4909]: I1126 07:14:58.100540 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nmqqp"] Nov 26 07:14:58 crc kubenswrapper[4909]: I1126 07:14:58.102916 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nmqqp" Nov 26 07:14:58 crc kubenswrapper[4909]: I1126 07:14:58.103040 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nmqqp"] Nov 26 07:14:58 crc kubenswrapper[4909]: I1126 07:14:58.204544 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6jx\" (UniqueName: \"kubernetes.io/projected/bc7cd522-0eab-4a8a-9146-abdb0d13ed54-kube-api-access-8c6jx\") pod \"openstack-operator-index-nmqqp\" (UID: \"bc7cd522-0eab-4a8a-9146-abdb0d13ed54\") " pod="openstack-operators/openstack-operator-index-nmqqp" Nov 26 07:14:58 crc kubenswrapper[4909]: I1126 07:14:58.306254 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6jx\" (UniqueName: \"kubernetes.io/projected/bc7cd522-0eab-4a8a-9146-abdb0d13ed54-kube-api-access-8c6jx\") pod \"openstack-operator-index-nmqqp\" (UID: \"bc7cd522-0eab-4a8a-9146-abdb0d13ed54\") " pod="openstack-operators/openstack-operator-index-nmqqp" Nov 26 07:14:58 crc kubenswrapper[4909]: I1126 07:14:58.331778 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6jx\" (UniqueName: \"kubernetes.io/projected/bc7cd522-0eab-4a8a-9146-abdb0d13ed54-kube-api-access-8c6jx\") pod \"openstack-operator-index-nmqqp\" (UID: \"bc7cd522-0eab-4a8a-9146-abdb0d13ed54\") " pod="openstack-operators/openstack-operator-index-nmqqp" Nov 26 07:14:58 crc kubenswrapper[4909]: I1126 07:14:58.423315 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nmqqp" Nov 26 07:14:58 crc kubenswrapper[4909]: I1126 07:14:58.632803 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nmqqp"] Nov 26 07:14:58 crc kubenswrapper[4909]: W1126 07:14:58.638110 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc7cd522_0eab_4a8a_9146_abdb0d13ed54.slice/crio-0e4221b963da5c72f7d8cea0f91bd4bce99e0df8b2f4df5c2c9839b35dd7c65c WatchSource:0}: Error finding container 0e4221b963da5c72f7d8cea0f91bd4bce99e0df8b2f4df5c2c9839b35dd7c65c: Status 404 returned error can't find the container with id 0e4221b963da5c72f7d8cea0f91bd4bce99e0df8b2f4df5c2c9839b35dd7c65c Nov 26 07:14:59 crc kubenswrapper[4909]: I1126 07:14:59.158947 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nmqqp" event={"ID":"bc7cd522-0eab-4a8a-9146-abdb0d13ed54","Type":"ContainerStarted","Data":"412c0ed142215584c489508a8764b4d6bb953dfe31bef03846a8849cb4497511"} Nov 26 07:14:59 crc kubenswrapper[4909]: I1126 07:14:59.158993 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nmqqp" event={"ID":"bc7cd522-0eab-4a8a-9146-abdb0d13ed54","Type":"ContainerStarted","Data":"0e4221b963da5c72f7d8cea0f91bd4bce99e0df8b2f4df5c2c9839b35dd7c65c"} Nov 26 07:14:59 crc kubenswrapper[4909]: I1126 07:14:59.159048 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-g4pcs" podUID="d18fda01-e7ed-49e9-83e9-a06dea3a3245" containerName="registry-server" containerID="cri-o://55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9" gracePeriod=2 Nov 26 07:14:59 crc kubenswrapper[4909]: I1126 07:14:59.181244 4909 
Nov 26 07:14:59 crc kubenswrapper[4909]: I1126 07:14:59.567677 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g4pcs"
Nov 26 07:14:59 crc kubenswrapper[4909]: I1126 07:14:59.724512 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rd4b\" (UniqueName: \"kubernetes.io/projected/d18fda01-e7ed-49e9-83e9-a06dea3a3245-kube-api-access-8rd4b\") pod \"d18fda01-e7ed-49e9-83e9-a06dea3a3245\" (UID: \"d18fda01-e7ed-49e9-83e9-a06dea3a3245\") "
Nov 26 07:14:59 crc kubenswrapper[4909]: I1126 07:14:59.730852 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18fda01-e7ed-49e9-83e9-a06dea3a3245-kube-api-access-8rd4b" (OuterVolumeSpecName: "kube-api-access-8rd4b") pod "d18fda01-e7ed-49e9-83e9-a06dea3a3245" (UID: "d18fda01-e7ed-49e9-83e9-a06dea3a3245"). InnerVolumeSpecName "kube-api-access-8rd4b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:14:59 crc kubenswrapper[4909]: I1126 07:14:59.826287 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rd4b\" (UniqueName: \"kubernetes.io/projected/d18fda01-e7ed-49e9-83e9-a06dea3a3245-kube-api-access-8rd4b\") on node \"crc\" DevicePath \"\""
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.134065 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"]
Nov 26 07:15:00 crc kubenswrapper[4909]: E1126 07:15:00.134397 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18fda01-e7ed-49e9-83e9-a06dea3a3245" containerName="registry-server"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.134417 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18fda01-e7ed-49e9-83e9-a06dea3a3245" containerName="registry-server"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.134609 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18fda01-e7ed-49e9-83e9-a06dea3a3245" containerName="registry-server"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.135198 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.137822 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.137940 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.147300 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"]
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.178547 4909 generic.go:334] "Generic (PLEG): container finished" podID="d18fda01-e7ed-49e9-83e9-a06dea3a3245" containerID="55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9" exitCode=0
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.179356 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g4pcs"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.179468 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g4pcs" event={"ID":"d18fda01-e7ed-49e9-83e9-a06dea3a3245","Type":"ContainerDied","Data":"55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9"}
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.179526 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g4pcs" event={"ID":"d18fda01-e7ed-49e9-83e9-a06dea3a3245","Type":"ContainerDied","Data":"66300d5ad5b368a37b9744133de8d2114650518e09b781b1581d6fb6ba50c5cf"}
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.179545 4909 scope.go:117] "RemoveContainer" containerID="55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.205122 4909 scope.go:117] "RemoveContainer" containerID="55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9"
Nov 26 07:15:00 crc kubenswrapper[4909]: E1126 07:15:00.205471 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9\": container with ID starting with 55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9 not found: ID does not exist" containerID="55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.205505 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9"} err="failed to get container status \"55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9\": rpc error: code = NotFound desc = could not find container \"55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9\": container with ID starting with 55d4d505b7c6e4575d79ac7d020b0c5e39a8ff31237f9f91723e5502665a80f9 not found: ID does not exist"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.207534 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-g4pcs"]
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.211058 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-g4pcs"]
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.232220 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c709137e-6913-47d3-8cbf-3b1ea4c598ef-secret-volume\") pod \"collect-profiles-29402355-mfkkk\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.232267 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c709137e-6913-47d3-8cbf-3b1ea4c598ef-config-volume\") pod \"collect-profiles-29402355-mfkkk\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.232386 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42hb6\" (UniqueName: \"kubernetes.io/projected/c709137e-6913-47d3-8cbf-3b1ea4c598ef-kube-api-access-42hb6\") pod \"collect-profiles-29402355-mfkkk\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.334351 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c709137e-6913-47d3-8cbf-3b1ea4c598ef-config-volume\") pod \"collect-profiles-29402355-mfkkk\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.334506 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42hb6\" (UniqueName: \"kubernetes.io/projected/c709137e-6913-47d3-8cbf-3b1ea4c598ef-kube-api-access-42hb6\") pod \"collect-profiles-29402355-mfkkk\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.335778 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c709137e-6913-47d3-8cbf-3b1ea4c598ef-config-volume\") pod \"collect-profiles-29402355-mfkkk\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.335988 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c709137e-6913-47d3-8cbf-3b1ea4c598ef-secret-volume\") pod \"collect-profiles-29402355-mfkkk\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.344221 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c709137e-6913-47d3-8cbf-3b1ea4c598ef-secret-volume\") pod \"collect-profiles-29402355-mfkkk\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"
Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.355496 4909
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42hb6\" (UniqueName: \"kubernetes.io/projected/c709137e-6913-47d3-8cbf-3b1ea4c598ef-kube-api-access-42hb6\") pod \"collect-profiles-29402355-mfkkk\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk" Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.454688 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk" Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.507405 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18fda01-e7ed-49e9-83e9-a06dea3a3245" path="/var/lib/kubelet/pods/d18fda01-e7ed-49e9-83e9-a06dea3a3245/volumes" Nov 26 07:15:00 crc kubenswrapper[4909]: I1126 07:15:00.632279 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"] Nov 26 07:15:00 crc kubenswrapper[4909]: W1126 07:15:00.636666 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc709137e_6913_47d3_8cbf_3b1ea4c598ef.slice/crio-e66d4d7d24dccd0606d09e7533372b504ee4fcef813a4e7dd5b40947f5984f06 WatchSource:0}: Error finding container e66d4d7d24dccd0606d09e7533372b504ee4fcef813a4e7dd5b40947f5984f06: Status 404 returned error can't find the container with id e66d4d7d24dccd0606d09e7533372b504ee4fcef813a4e7dd5b40947f5984f06 Nov 26 07:15:01 crc kubenswrapper[4909]: I1126 07:15:01.189409 4909 generic.go:334] "Generic (PLEG): container finished" podID="c709137e-6913-47d3-8cbf-3b1ea4c598ef" containerID="06b6916491440fff175c0fcd648d46fc98acd442352b71d7cb86fcf80ff8af8e" exitCode=0 Nov 26 07:15:01 crc kubenswrapper[4909]: I1126 07:15:01.189959 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk" event={"ID":"c709137e-6913-47d3-8cbf-3b1ea4c598ef","Type":"ContainerDied","Data":"06b6916491440fff175c0fcd648d46fc98acd442352b71d7cb86fcf80ff8af8e"} Nov 26 07:15:01 crc kubenswrapper[4909]: I1126 07:15:01.189991 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk" event={"ID":"c709137e-6913-47d3-8cbf-3b1ea4c598ef","Type":"ContainerStarted","Data":"e66d4d7d24dccd0606d09e7533372b504ee4fcef813a4e7dd5b40947f5984f06"} Nov 26 07:15:02 crc kubenswrapper[4909]: I1126 07:15:02.410497 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk" Nov 26 07:15:02 crc kubenswrapper[4909]: I1126 07:15:02.564695 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c709137e-6913-47d3-8cbf-3b1ea4c598ef-config-volume\") pod \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " Nov 26 07:15:02 crc kubenswrapper[4909]: I1126 07:15:02.564781 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42hb6\" (UniqueName: \"kubernetes.io/projected/c709137e-6913-47d3-8cbf-3b1ea4c598ef-kube-api-access-42hb6\") pod \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " Nov 26 07:15:02 crc kubenswrapper[4909]: I1126 07:15:02.564969 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c709137e-6913-47d3-8cbf-3b1ea4c598ef-secret-volume\") pod \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\" (UID: \"c709137e-6913-47d3-8cbf-3b1ea4c598ef\") " Nov 26 07:15:02 crc kubenswrapper[4909]: I1126 07:15:02.565939 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c709137e-6913-47d3-8cbf-3b1ea4c598ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "c709137e-6913-47d3-8cbf-3b1ea4c598ef" (UID: "c709137e-6913-47d3-8cbf-3b1ea4c598ef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:15:02 crc kubenswrapper[4909]: I1126 07:15:02.570161 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c709137e-6913-47d3-8cbf-3b1ea4c598ef-kube-api-access-42hb6" (OuterVolumeSpecName: "kube-api-access-42hb6") pod "c709137e-6913-47d3-8cbf-3b1ea4c598ef" (UID: "c709137e-6913-47d3-8cbf-3b1ea4c598ef"). InnerVolumeSpecName "kube-api-access-42hb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:15:02 crc kubenswrapper[4909]: I1126 07:15:02.570686 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c709137e-6913-47d3-8cbf-3b1ea4c598ef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c709137e-6913-47d3-8cbf-3b1ea4c598ef" (UID: "c709137e-6913-47d3-8cbf-3b1ea4c598ef"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:15:02 crc kubenswrapper[4909]: I1126 07:15:02.666479 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c709137e-6913-47d3-8cbf-3b1ea4c598ef-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 07:15:02 crc kubenswrapper[4909]: I1126 07:15:02.666517 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42hb6\" (UniqueName: \"kubernetes.io/projected/c709137e-6913-47d3-8cbf-3b1ea4c598ef-kube-api-access-42hb6\") on node \"crc\" DevicePath \"\"" Nov 26 07:15:02 crc kubenswrapper[4909]: I1126 07:15:02.666530 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c709137e-6913-47d3-8cbf-3b1ea4c598ef-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 07:15:03 crc kubenswrapper[4909]: I1126 07:15:03.210897 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk" event={"ID":"c709137e-6913-47d3-8cbf-3b1ea4c598ef","Type":"ContainerDied","Data":"e66d4d7d24dccd0606d09e7533372b504ee4fcef813a4e7dd5b40947f5984f06"} Nov 26 07:15:03 crc kubenswrapper[4909]: I1126 07:15:03.211001 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e66d4d7d24dccd0606d09e7533372b504ee4fcef813a4e7dd5b40947f5984f06" Nov 26 07:15:03 crc kubenswrapper[4909]: I1126 07:15:03.211171 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk" Nov 26 07:15:07 crc kubenswrapper[4909]: I1126 07:15:07.301120 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:15:07 crc kubenswrapper[4909]: I1126 07:15:07.301760 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:15:08 crc kubenswrapper[4909]: I1126 07:15:08.424565 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nmqqp" Nov 26 07:15:08 crc kubenswrapper[4909]: I1126 07:15:08.424707 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nmqqp" Nov 26 07:15:08 crc kubenswrapper[4909]: I1126 07:15:08.468419 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nmqqp" Nov 26 07:15:09 crc kubenswrapper[4909]: I1126 07:15:09.283278 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nmqqp" Nov 26 07:15:14 crc kubenswrapper[4909]: I1126 07:15:14.757677 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt"] Nov 26 07:15:14 crc kubenswrapper[4909]: E1126 07:15:14.758489 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c709137e-6913-47d3-8cbf-3b1ea4c598ef" 
containerName="collect-profiles" Nov 26 07:15:14 crc kubenswrapper[4909]: I1126 07:15:14.758507 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c709137e-6913-47d3-8cbf-3b1ea4c598ef" containerName="collect-profiles" Nov 26 07:15:14 crc kubenswrapper[4909]: I1126 07:15:14.758665 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c709137e-6913-47d3-8cbf-3b1ea4c598ef" containerName="collect-profiles" Nov 26 07:15:14 crc kubenswrapper[4909]: I1126 07:15:14.760559 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:14 crc kubenswrapper[4909]: I1126 07:15:14.761829 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt"] Nov 26 07:15:14 crc kubenswrapper[4909]: I1126 07:15:14.763203 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9j754" Nov 26 07:15:14 crc kubenswrapper[4909]: I1126 07:15:14.940061 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-util\") pod \"88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:14 crc kubenswrapper[4909]: I1126 07:15:14.940151 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmr59\" (UniqueName: \"kubernetes.io/projected/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-kube-api-access-rmr59\") pod \"88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:14 crc kubenswrapper[4909]: I1126 07:15:14.940195 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-bundle\") pod \"88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:15 crc kubenswrapper[4909]: I1126 07:15:15.041965 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-bundle\") pod \"88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:15 crc kubenswrapper[4909]: I1126 07:15:15.042202 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-util\") pod \"88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:15 crc kubenswrapper[4909]: I1126 07:15:15.042256 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rmr59\" (UniqueName: \"kubernetes.io/projected/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-kube-api-access-rmr59\") pod \"88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:15 crc kubenswrapper[4909]: I1126 07:15:15.042440 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-bundle\") pod \"88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:15 crc kubenswrapper[4909]: I1126 07:15:15.042687 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-util\") pod \"88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:15 crc kubenswrapper[4909]: I1126 07:15:15.067194 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmr59\" (UniqueName: \"kubernetes.io/projected/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-kube-api-access-rmr59\") pod \"88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:15 crc kubenswrapper[4909]: I1126 07:15:15.081038 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:15 crc kubenswrapper[4909]: I1126 07:15:15.355573 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt"] Nov 26 07:15:15 crc kubenswrapper[4909]: W1126 07:15:15.364982 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bfe3bfb_ebaf_4bb6_a91d_66d8278dda98.slice/crio-8b7d08311544b98063896014211546b4b3d63b4115f44aaafa5f66f8f69db3bc WatchSource:0}: Error finding container 8b7d08311544b98063896014211546b4b3d63b4115f44aaafa5f66f8f69db3bc: Status 404 returned error can't find the container with id 8b7d08311544b98063896014211546b4b3d63b4115f44aaafa5f66f8f69db3bc Nov 26 07:15:16 crc kubenswrapper[4909]: I1126 07:15:16.307174 4909 generic.go:334] "Generic (PLEG): container finished" podID="9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" containerID="9dabba796ebb07fa33945c553b314d7f55d2c5c933444a99b90384e84f72d314" exitCode=0 Nov 26 07:15:16 crc kubenswrapper[4909]: I1126 07:15:16.307275 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" event={"ID":"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98","Type":"ContainerDied","Data":"9dabba796ebb07fa33945c553b314d7f55d2c5c933444a99b90384e84f72d314"} Nov 26 07:15:16 crc kubenswrapper[4909]: I1126 07:15:16.307335 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" event={"ID":"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98","Type":"ContainerStarted","Data":"8b7d08311544b98063896014211546b4b3d63b4115f44aaafa5f66f8f69db3bc"} Nov 26 07:15:17 crc kubenswrapper[4909]: I1126 07:15:17.320413 4909 generic.go:334] "Generic (PLEG): container finished" podID="9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" containerID="51ab26f501a71f158fa52c4ba4b9fa14b5bfbac5e895c67b33a4ea796a78d363" exitCode=0 Nov 26 07:15:17 crc kubenswrapper[4909]: I1126 07:15:17.320490 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" event={"ID":"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98","Type":"ContainerDied","Data":"51ab26f501a71f158fa52c4ba4b9fa14b5bfbac5e895c67b33a4ea796a78d363"} Nov 26 07:15:18 crc kubenswrapper[4909]: I1126 07:15:18.329566 4909 generic.go:334] "Generic (PLEG): container finished" podID="9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" containerID="8d6838aa2072a65f937b5e338a59ac60b7bdb63678d23969bc1312bef8de451e" exitCode=0 Nov 26 07:15:18 crc kubenswrapper[4909]: I1126 07:15:18.329687 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" event={"ID":"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98","Type":"ContainerDied","Data":"8d6838aa2072a65f937b5e338a59ac60b7bdb63678d23969bc1312bef8de451e"} Nov 26 07:15:19 crc kubenswrapper[4909]: I1126 07:15:19.611660 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:19 crc kubenswrapper[4909]: I1126 07:15:19.712422 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmr59\" (UniqueName: \"kubernetes.io/projected/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-kube-api-access-rmr59\") pod \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " Nov 26 07:15:19 crc kubenswrapper[4909]: I1126 07:15:19.712522 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-util\") pod \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " Nov 26 07:15:19 crc kubenswrapper[4909]: I1126 07:15:19.712619 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-bundle\") pod \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\" (UID: \"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98\") " Nov 26 07:15:19 crc kubenswrapper[4909]: I1126 07:15:19.714010 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-bundle" (OuterVolumeSpecName: "bundle") pod "9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" (UID: "9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:15:19 crc kubenswrapper[4909]: I1126 07:15:19.719355 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-kube-api-access-rmr59" (OuterVolumeSpecName: "kube-api-access-rmr59") pod "9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" (UID: "9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98"). InnerVolumeSpecName "kube-api-access-rmr59". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:15:19 crc kubenswrapper[4909]: I1126 07:15:19.733687 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-util" (OuterVolumeSpecName: "util") pod "9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" (UID: "9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:15:19 crc kubenswrapper[4909]: I1126 07:15:19.814374 4909 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:15:19 crc kubenswrapper[4909]: I1126 07:15:19.814424 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmr59\" (UniqueName: \"kubernetes.io/projected/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-kube-api-access-rmr59\") on node \"crc\" DevicePath \"\"" Nov 26 07:15:19 crc kubenswrapper[4909]: I1126 07:15:19.814482 4909 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98-util\") on node \"crc\" DevicePath \"\"" Nov 26 07:15:20 crc kubenswrapper[4909]: I1126 07:15:20.347138 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" event={"ID":"9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98","Type":"ContainerDied","Data":"8b7d08311544b98063896014211546b4b3d63b4115f44aaafa5f66f8f69db3bc"} Nov 26 07:15:20 crc kubenswrapper[4909]: I1126 07:15:20.347204 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b7d08311544b98063896014211546b4b3d63b4115f44aaafa5f66f8f69db3bc" Nov 26 07:15:20 crc kubenswrapper[4909]: I1126 07:15:20.347218 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt" Nov 26 07:15:27 crc kubenswrapper[4909]: I1126 07:15:27.954417 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv"] Nov 26 07:15:27 crc kubenswrapper[4909]: E1126 07:15:27.955022 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" containerName="extract" Nov 26 07:15:27 crc kubenswrapper[4909]: I1126 07:15:27.955033 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" containerName="extract" Nov 26 07:15:27 crc kubenswrapper[4909]: E1126 07:15:27.955044 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" containerName="pull" Nov 26 07:15:27 crc kubenswrapper[4909]: I1126 07:15:27.955050 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" containerName="pull" Nov 26 07:15:27 crc kubenswrapper[4909]: E1126 07:15:27.955060 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" containerName="util" Nov 26 07:15:27 crc kubenswrapper[4909]: I1126 07:15:27.955067 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" containerName="util" Nov 26 07:15:27 crc kubenswrapper[4909]: I1126 07:15:27.955171 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98" containerName="extract" Nov 26 07:15:27 crc kubenswrapper[4909]: I1126 07:15:27.955788 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" Nov 26 07:15:27 crc kubenswrapper[4909]: I1126 07:15:27.957831 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-fs9fb" Nov 26 07:15:27 crc kubenswrapper[4909]: I1126 07:15:27.976353 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv"] Nov 26 07:15:28 crc kubenswrapper[4909]: I1126 07:15:28.126435 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbfk8\" (UniqueName: \"kubernetes.io/projected/dd0d0446-c640-42e7-9ff6-e71e59e4a459-kube-api-access-wbfk8\") pod \"openstack-operator-controller-operator-6c945fd485-mgkgv\" (UID: \"dd0d0446-c640-42e7-9ff6-e71e59e4a459\") " pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" Nov 26 07:15:28 crc kubenswrapper[4909]: I1126 07:15:28.227983 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbfk8\" (UniqueName: \"kubernetes.io/projected/dd0d0446-c640-42e7-9ff6-e71e59e4a459-kube-api-access-wbfk8\") pod \"openstack-operator-controller-operator-6c945fd485-mgkgv\" (UID: \"dd0d0446-c640-42e7-9ff6-e71e59e4a459\") " pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" Nov 26 07:15:28 crc kubenswrapper[4909]: I1126 07:15:28.246939 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbfk8\" (UniqueName: \"kubernetes.io/projected/dd0d0446-c640-42e7-9ff6-e71e59e4a459-kube-api-access-wbfk8\") pod \"openstack-operator-controller-operator-6c945fd485-mgkgv\" (UID: \"dd0d0446-c640-42e7-9ff6-e71e59e4a459\") " pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" Nov 26 07:15:28 crc kubenswrapper[4909]: I1126 07:15:28.271441 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" Nov 26 07:15:28 crc kubenswrapper[4909]: I1126 07:15:28.722357 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv"] Nov 26 07:15:29 crc kubenswrapper[4909]: I1126 07:15:29.438664 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" event={"ID":"dd0d0446-c640-42e7-9ff6-e71e59e4a459","Type":"ContainerStarted","Data":"87bfe396846513d7c03f80fc5ab5441c2ad8ada40d037c3772894153b5df034d"} Nov 26 07:15:32 crc kubenswrapper[4909]: I1126 07:15:32.459464 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" event={"ID":"dd0d0446-c640-42e7-9ff6-e71e59e4a459","Type":"ContainerStarted","Data":"e702e5ed145f6afda1182310485b2ce992e5e685fe432b41ddc1a904abe6bfc4"} Nov 26 07:15:35 crc kubenswrapper[4909]: I1126 07:15:35.481484 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" event={"ID":"dd0d0446-c640-42e7-9ff6-e71e59e4a459","Type":"ContainerStarted","Data":"7bb2eb7e8e5ab0bd46e57420785b2e03ce205583fd20f35cae0e160cf3c1c3c2"} Nov 26 07:15:35 crc kubenswrapper[4909]: I1126 07:15:35.481885 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" Nov 26 07:15:35 crc kubenswrapper[4909]: I1126 07:15:35.506924 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" podStartSLOduration=2.788821836 podStartE2EDuration="8.506908423s" podCreationTimestamp="2025-11-26 07:15:27 +0000 UTC" firstStartedPulling="2025-11-26 07:15:28.739632161 +0000 UTC m=+900.885843327" lastFinishedPulling="2025-11-26 07:15:34.457718748 +0000 UTC m=+906.603929914" observedRunningTime="2025-11-26 07:15:35.506240486 +0000 UTC m=+907.652451662" watchObservedRunningTime="2025-11-26 07:15:35.506908423 +0000 UTC m=+907.653119589" Nov 26 07:15:37 crc kubenswrapper[4909]: I1126 07:15:37.301517 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:15:37 crc kubenswrapper[4909]: I1126 07:15:37.301898 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:15:38 crc kubenswrapper[4909]: I1126 07:15:38.274279 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" Nov 26 07:16:07 crc kubenswrapper[4909]: I1126 07:16:07.300842 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 26 07:16:07 crc kubenswrapper[4909]: I1126 07:16:07.301470 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:16:07 crc kubenswrapper[4909]: I1126 07:16:07.301526 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:16:07 crc kubenswrapper[4909]: I1126 07:16:07.302279 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e56f7341ca39cd31863ba15982a9b0b7165f8ceb520eefc8a8ee6734e9f64390"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 07:16:07 crc kubenswrapper[4909]: I1126 07:16:07.302363 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://e56f7341ca39cd31863ba15982a9b0b7165f8ceb520eefc8a8ee6734e9f64390" gracePeriod=600 Nov 26 07:16:07 crc kubenswrapper[4909]: I1126 07:16:07.681712 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="e56f7341ca39cd31863ba15982a9b0b7165f8ceb520eefc8a8ee6734e9f64390" exitCode=0 Nov 26 07:16:07 crc kubenswrapper[4909]: I1126 07:16:07.681754 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"e56f7341ca39cd31863ba15982a9b0b7165f8ceb520eefc8a8ee6734e9f64390"} Nov 26 07:16:07 crc kubenswrapper[4909]: I1126 07:16:07.681784 4909 scope.go:117] "RemoveContainer" containerID="d4c41dceb1d36b1bd9dc641bc254e711b1901f8039253b64ac1c47b6d8051be7" Nov 26 07:16:08 crc kubenswrapper[4909]: I1126 07:16:08.689341 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"ca3ef3a41105acb11f5f44a2c705bdfdb176056de55b77bbb72e7524ee7071fd"} Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.603009 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.604692 4909 util.go:30] "No sandbox for pod can be found. 
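gracePeriod=600 above means: send the container's init process SIGTERM, wait up to the grace period for it to exit, then escalate to SIGKILL. The same mechanism killed the registry-server earlier in this log with gracePeriod=2. A process-level sketch of the pattern:

```go
// Sketch of the grace-period kill behind "Killing container with a grace
// period": SIGTERM first, a bounded wait, then SIGKILL as a last resort.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func killWithGracePeriod(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM) // polite request to stop
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // escalate: SIGKILL
		<-done
		fmt.Println("killed after grace period elapsed")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGracePeriod(cmd, 2*time.Second) // the registry-server kill used gracePeriod=2
}
```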
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.606924 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-xs58n" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.612395 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.621357 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.622594 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.625864 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-sbqbv" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.637001 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.650918 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.651923 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.659058 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-l2fc5" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.661813 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.663139 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.664028 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p82p7\" (UniqueName: \"kubernetes.io/projected/138eaa02-be79-4e16-8627-cc582d5b6770-kube-api-access-p82p7\") pod \"cinder-operator-controller-manager-748967c98-2x9sp\" (UID: \"138eaa02-be79-4e16-8627-cc582d5b6770\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.664093 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6h2\" (UniqueName: \"kubernetes.io/projected/f7f77917-da54-4e82-a356-80000a53395a-kube-api-access-gx6h2\") pod \"barbican-operator-controller-manager-5bfbbb859d-2cwgh\" (UID: \"f7f77917-da54-4e82-a356-80000a53395a\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.664151 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdwwd\" (UniqueName: \"kubernetes.io/projected/b3ca7f6d-4dba-4e22-ae42-f4184932fba2-kube-api-access-fdwwd\") pod \"designate-operator-controller-manager-6788cc6d75-scqbd\" (UID: \"b3ca7f6d-4dba-4e22-ae42-f4184932fba2\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.665257 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-zxnmq" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.667853 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.668978 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.671865 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8h9nj" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.685678 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.699658 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.714774 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.721702 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.722861 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.726264 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-v7qjq" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.736442 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.742603 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.743649 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.745818 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.748517 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9q9lk" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.755114 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.764654 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-8486p"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.765786 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.766850 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p82p7\" (UniqueName: \"kubernetes.io/projected/138eaa02-be79-4e16-8627-cc582d5b6770-kube-api-access-p82p7\") pod \"cinder-operator-controller-manager-748967c98-2x9sp\" (UID: \"138eaa02-be79-4e16-8627-cc582d5b6770\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.766894 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4-cert\") pod \"infra-operator-controller-manager-577c5f6d94-d44wm\" (UID: \"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.766921 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx6h2\" (UniqueName: \"kubernetes.io/projected/f7f77917-da54-4e82-a356-80000a53395a-kube-api-access-gx6h2\") pod \"barbican-operator-controller-manager-5bfbbb859d-2cwgh\" (UID: \"f7f77917-da54-4e82-a356-80000a53395a\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.766948 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j7gq\" (UniqueName: \"kubernetes.io/projected/0ebad6d0-e522-4012-869e-903c89bd1703-kube-api-access-6j7gq\") pod \"horizon-operator-controller-manager-7d5d9fd47f-sphql\" (UID: \"0ebad6d0-e522-4012-869e-903c89bd1703\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.766976 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqcbj\" (UniqueName: \"kubernetes.io/projected/ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4-kube-api-access-rqcbj\") pod \"infra-operator-controller-manager-577c5f6d94-d44wm\" (UID: \"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.766998 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdwwd\" (UniqueName: \"kubernetes.io/projected/b3ca7f6d-4dba-4e22-ae42-f4184932fba2-kube-api-access-fdwwd\") pod \"designate-operator-controller-manager-6788cc6d75-scqbd\" (UID: \"b3ca7f6d-4dba-4e22-ae42-f4184932fba2\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.767020 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn6bl\" (UniqueName: \"kubernetes.io/projected/cd83d237-7922-4458-9fce-8c296d0ccc0f-kube-api-access-fn6bl\") pod \"glance-operator-controller-manager-6bd966bbd4-6j4kw\" (UID: \"cd83d237-7922-4458-9fce-8c296d0ccc0f\") " pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.767038 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68stn\" 
(UniqueName: \"kubernetes.io/projected/f4c87de0-5b1c-44f8-a2fb-1949a3f4af03-kube-api-access-68stn\") pod \"heat-operator-controller-manager-698d6fd7d6-692sc\" (UID: \"f4c87de0-5b1c-44f8-a2fb-1949a3f4af03\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.777665 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-trskf" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.785896 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.787049 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.792505 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdwwd\" (UniqueName: \"kubernetes.io/projected/b3ca7f6d-4dba-4e22-ae42-f4184932fba2-kube-api-access-fdwwd\") pod \"designate-operator-controller-manager-6788cc6d75-scqbd\" (UID: \"b3ca7f6d-4dba-4e22-ae42-f4184932fba2\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.793241 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p82p7\" (UniqueName: \"kubernetes.io/projected/138eaa02-be79-4e16-8627-cc582d5b6770-kube-api-access-p82p7\") pod \"cinder-operator-controller-manager-748967c98-2x9sp\" (UID: \"138eaa02-be79-4e16-8627-cc582d5b6770\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.801984 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-txg7z" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.841329 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx6h2\" (UniqueName: \"kubernetes.io/projected/f7f77917-da54-4e82-a356-80000a53395a-kube-api-access-gx6h2\") pod \"barbican-operator-controller-manager-5bfbbb859d-2cwgh\" (UID: \"f7f77917-da54-4e82-a356-80000a53395a\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.867702 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-8486p"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.871037 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4-cert\") pod \"infra-operator-controller-manager-577c5f6d94-d44wm\" (UID: \"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.871390 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j7gq\" (UniqueName: \"kubernetes.io/projected/0ebad6d0-e522-4012-869e-903c89bd1703-kube-api-access-6j7gq\") pod \"horizon-operator-controller-manager-7d5d9fd47f-sphql\" (UID: \"0ebad6d0-e522-4012-869e-903c89bd1703\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 
07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.871544 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqcbj\" (UniqueName: \"kubernetes.io/projected/ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4-kube-api-access-rqcbj\") pod \"infra-operator-controller-manager-577c5f6d94-d44wm\" (UID: \"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.871767 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn6bl\" (UniqueName: \"kubernetes.io/projected/cd83d237-7922-4458-9fce-8c296d0ccc0f-kube-api-access-fn6bl\") pod \"glance-operator-controller-manager-6bd966bbd4-6j4kw\" (UID: \"cd83d237-7922-4458-9fce-8c296d0ccc0f\") " pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.871892 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68stn\" (UniqueName: \"kubernetes.io/projected/f4c87de0-5b1c-44f8-a2fb-1949a3f4af03-kube-api-access-68stn\") pod \"heat-operator-controller-manager-698d6fd7d6-692sc\" (UID: \"f4c87de0-5b1c-44f8-a2fb-1949a3f4af03\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:16:11 crc kubenswrapper[4909]: E1126 07:16:11.871275 4909 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 26 07:16:11 crc kubenswrapper[4909]: E1126 07:16:11.872082 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4-cert podName:ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4 nodeName:}" failed. No retries permitted until 2025-11-26 07:16:12.3720567 +0000 UTC m=+944.518267866 (durationBeforeRetry 500ms). 
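The failed cert mount is not retried immediately: nestedpendingoperations re-queues the same operation with a growing durationBeforeRetry (500ms here), so the mount keeps retrying until the webhook Secret is created. A sketch of that backoff-until-present loop; the 500ms initial delay comes from the log, while the doubling factor and cap are assumptions:

```go
// Sketch of the retry policy behind "No retries permitted until ...
// (durationBeforeRetry 500ms)": each failure of the same mount operation
// grows the wait (doubling and the cap are assumptions), until the missing
// secret shows up and the mount succeeds.
package main

import (
	"errors"
	"fmt"
	"time"
)

func mountCert(attempt int) error {
	if attempt < 3 { // pretend the webhook cert secret appears on attempt 3
		return errors.New(`secret "infra-operator-webhook-server-cert" not found`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond
	const maxDelay = 2 * time.Minute
	for attempt := 0; ; attempt++ {
		if err := mountCert(attempt); err == nil {
			fmt.Println(`MountVolume.SetUp succeeded for volume "cert"`)
			return
		} else {
			fmt.Printf("failed: %v; no retries permitted for %v\n", err, delay)
		}
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```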
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4-cert") pod "infra-operator-controller-manager-577c5f6d94-d44wm" (UID: "ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4") : secret "infra-operator-webhook-server-cert" not found Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.896413 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68stn\" (UniqueName: \"kubernetes.io/projected/f4c87de0-5b1c-44f8-a2fb-1949a3f4af03-kube-api-access-68stn\") pod \"heat-operator-controller-manager-698d6fd7d6-692sc\" (UID: \"f4c87de0-5b1c-44f8-a2fb-1949a3f4af03\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.906135 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn6bl\" (UniqueName: \"kubernetes.io/projected/cd83d237-7922-4458-9fce-8c296d0ccc0f-kube-api-access-fn6bl\") pod \"glance-operator-controller-manager-6bd966bbd4-6j4kw\" (UID: \"cd83d237-7922-4458-9fce-8c296d0ccc0f\") " pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.907095 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.908125 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqcbj\" (UniqueName: \"kubernetes.io/projected/ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4-kube-api-access-rqcbj\") pod \"infra-operator-controller-manager-577c5f6d94-d44wm\" (UID: \"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.909549 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j7gq\" (UniqueName: \"kubernetes.io/projected/0ebad6d0-e522-4012-869e-903c89bd1703-kube-api-access-6j7gq\") pod \"horizon-operator-controller-manager-7d5d9fd47f-sphql\" (UID: \"0ebad6d0-e522-4012-869e-903c89bd1703\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.922479 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-646fd589f9-phclr"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.923859 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.928874 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-k6xs8" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.933106 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.939008 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.940683 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-646fd589f9-phclr"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.973309 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.974317 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5zbs\" (UniqueName: \"kubernetes.io/projected/8c9c6404-9f47-434c-ac1b-d08cd48d5156-kube-api-access-g5zbs\") pod \"ironic-operator-controller-manager-54485f899-8486p\" (UID: \"8c9c6404-9f47-434c-ac1b-d08cd48d5156\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.974377 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hqsr\" (UniqueName: \"kubernetes.io/projected/757566f7-a07b-4623-8668-b39f715ea7a9-kube-api-access-8hqsr\") pod \"keystone-operator-controller-manager-7d6f5d799-4gr4q\" (UID: \"757566f7-a07b-4623-8668-b39f715ea7a9\") " pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.974675 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.976114 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.979041 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tnxdb" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.983876 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.988757 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.991579 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84"] Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.993000 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:16:11 crc kubenswrapper[4909]: I1126 07:16:11.995205 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xs9zf" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.003435 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.004212 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.005315 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.008964 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mbnf5" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.009326 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.023681 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.028092 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.029243 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.032676 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-tpq8q" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.035919 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.041915 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.042944 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.045750 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.045947 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-c7gpv" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.050940 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.052098 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.053425 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.062379 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.067022 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tnrk4" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.073516 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.075333 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5zbs\" (UniqueName: \"kubernetes.io/projected/8c9c6404-9f47-434c-ac1b-d08cd48d5156-kube-api-access-g5zbs\") pod \"ironic-operator-controller-manager-54485f899-8486p\" (UID: \"8c9c6404-9f47-434c-ac1b-d08cd48d5156\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.075383 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hqsr\" (UniqueName: \"kubernetes.io/projected/757566f7-a07b-4623-8668-b39f715ea7a9-kube-api-access-8hqsr\") pod \"keystone-operator-controller-manager-7d6f5d799-4gr4q\" (UID: \"757566f7-a07b-4623-8668-b39f715ea7a9\") " pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.075433 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5sq9\" (UniqueName: \"kubernetes.io/projected/9f41a032-71ff-4608-aa2c-b16469fe55a0-kube-api-access-z5sq9\") pod \"manila-operator-controller-manager-646fd589f9-phclr\" (UID: \"9f41a032-71ff-4608-aa2c-b16469fe55a0\") " pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.077574 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.087393 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.091087 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nhlth" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.103908 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.106236 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.108776 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.110138 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-hrbnn" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.117384 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.125890 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hqsr\" (UniqueName: \"kubernetes.io/projected/757566f7-a07b-4623-8668-b39f715ea7a9-kube-api-access-8hqsr\") pod \"keystone-operator-controller-manager-7d6f5d799-4gr4q\" (UID: \"757566f7-a07b-4623-8668-b39f715ea7a9\") " pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.126628 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5zbs\" (UniqueName: \"kubernetes.io/projected/8c9c6404-9f47-434c-ac1b-d08cd48d5156-kube-api-access-g5zbs\") pod \"ironic-operator-controller-manager-54485f899-8486p\" (UID: \"8c9c6404-9f47-434c-ac1b-d08cd48d5156\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.131827 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.132887 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.134975 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-n49jc" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.148382 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.171820 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.176766 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hmpc\" (UniqueName: \"kubernetes.io/projected/61289245-0b12-4689-8a98-2b24544cacf8-kube-api-access-9hmpc\") pod \"octavia-operator-controller-manager-7979c68bc7-c696l\" (UID: \"61289245-0b12-4689-8a98-2b24544cacf8\") " pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.176856 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b68371f8-f38e-44e5-bd68-d059f1e3e89a-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-kdp8v\" (UID: \"b68371f8-f38e-44e5-bd68-d059f1e3e89a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.176895 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8rdj\" (UniqueName: \"kubernetes.io/projected/cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef-kube-api-access-q8rdj\") pod \"mariadb-operator-controller-manager-64d7c556cd-872rr\" (UID: \"cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.176941 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h884\" (UniqueName: \"kubernetes.io/projected/4a162aeb-8377-45aa-bd44-6b8aed2f93fb-kube-api-access-2h884\") pod \"nova-operator-controller-manager-79d658b66d-swdlm\" (UID: \"4a162aeb-8377-45aa-bd44-6b8aed2f93fb\") " pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.176994 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w967t\" (UniqueName: \"kubernetes.io/projected/cad0b373-54da-4331-aa01-27d08edaa1ef-kube-api-access-w967t\") pod \"ovn-operator-controller-manager-5b67cfc8fb-xcrzl\" (UID: \"cad0b373-54da-4331-aa01-27d08edaa1ef\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.177026 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4jwp\" (UniqueName: \"kubernetes.io/projected/b68371f8-f38e-44e5-bd68-d059f1e3e89a-kube-api-access-q4jwp\") pod \"openstack-baremetal-operator-controller-manager-77868f484-kdp8v\" (UID: \"b68371f8-f38e-44e5-bd68-d059f1e3e89a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.177094 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5sq9\" (UniqueName: \"kubernetes.io/projected/9f41a032-71ff-4608-aa2c-b16469fe55a0-kube-api-access-z5sq9\") pod \"manila-operator-controller-manager-646fd589f9-phclr\" (UID: 
\"9f41a032-71ff-4608-aa2c-b16469fe55a0\") " pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.177135 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdssn\" (UniqueName: \"kubernetes.io/projected/af4a09dd-04e0-465d-a817-bacf1a52babe-kube-api-access-gdssn\") pod \"neutron-operator-controller-manager-6b6c55ffd5-dhp84\" (UID: \"af4a09dd-04e0-465d-a817-bacf1a52babe\") " pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.201137 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.208472 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.210496 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5sq9\" (UniqueName: \"kubernetes.io/projected/9f41a032-71ff-4608-aa2c-b16469fe55a0-kube-api-access-z5sq9\") pod \"manila-operator-controller-manager-646fd589f9-phclr\" (UID: \"9f41a032-71ff-4608-aa2c-b16469fe55a0\") " pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.214391 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-dl4bs" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.216703 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.251795 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.255191 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.260048 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f47fc" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.282264 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvlsn\" (UniqueName: \"kubernetes.io/projected/f8afd5eb-02e8-4a94-be0d-19a709270945-kube-api-access-pvlsn\") pod \"telemetry-operator-controller-manager-58487d9bf4-7rjcs\" (UID: \"f8afd5eb-02e8-4a94-be0d-19a709270945\") " pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.282306 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wjr4\" (UniqueName: \"kubernetes.io/projected/10e6987e-11d4-4c64-bc26-bb45590f3fff-kube-api-access-9wjr4\") pod \"placement-operator-controller-manager-867d87977b-5t9sx\" (UID: \"10e6987e-11d4-4c64-bc26-bb45590f3fff\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.282341 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdssn\" (UniqueName: \"kubernetes.io/projected/af4a09dd-04e0-465d-a817-bacf1a52babe-kube-api-access-gdssn\") pod \"neutron-operator-controller-manager-6b6c55ffd5-dhp84\" (UID: \"af4a09dd-04e0-465d-a817-bacf1a52babe\") " pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.282361 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hmpc\" (UniqueName: \"kubernetes.io/projected/61289245-0b12-4689-8a98-2b24544cacf8-kube-api-access-9hmpc\") pod \"octavia-operator-controller-manager-7979c68bc7-c696l\" (UID: \"61289245-0b12-4689-8a98-2b24544cacf8\") " pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.282385 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b68371f8-f38e-44e5-bd68-d059f1e3e89a-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-kdp8v\" (UID: \"b68371f8-f38e-44e5-bd68-d059f1e3e89a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.282405 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8rdj\" (UniqueName: \"kubernetes.io/projected/cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef-kube-api-access-q8rdj\") pod \"mariadb-operator-controller-manager-64d7c556cd-872rr\" (UID: \"cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.282425 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h884\" (UniqueName: \"kubernetes.io/projected/4a162aeb-8377-45aa-bd44-6b8aed2f93fb-kube-api-access-2h884\") pod \"nova-operator-controller-manager-79d658b66d-swdlm\" (UID: \"4a162aeb-8377-45aa-bd44-6b8aed2f93fb\") " 
pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.282461 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w967t\" (UniqueName: \"kubernetes.io/projected/cad0b373-54da-4331-aa01-27d08edaa1ef-kube-api-access-w967t\") pod \"ovn-operator-controller-manager-5b67cfc8fb-xcrzl\" (UID: \"cad0b373-54da-4331-aa01-27d08edaa1ef\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.282484 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4jwp\" (UniqueName: \"kubernetes.io/projected/b68371f8-f38e-44e5-bd68-d059f1e3e89a-kube-api-access-q4jwp\") pod \"openstack-baremetal-operator-controller-manager-77868f484-kdp8v\" (UID: \"b68371f8-f38e-44e5-bd68-d059f1e3e89a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.282505 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc42w\" (UniqueName: \"kubernetes.io/projected/5b985112-f6b3-4879-b02e-8ac0e510730b-kube-api-access-rc42w\") pod \"swift-operator-controller-manager-cc9f5bc5c-kbwpk\" (UID: \"5b985112-f6b3-4879-b02e-8ac0e510730b\") " pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.307629 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.316850 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h884\" (UniqueName: \"kubernetes.io/projected/4a162aeb-8377-45aa-bd44-6b8aed2f93fb-kube-api-access-2h884\") pod \"nova-operator-controller-manager-79d658b66d-swdlm\" (UID: \"4a162aeb-8377-45aa-bd44-6b8aed2f93fb\") " pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:16:12 crc kubenswrapper[4909]: E1126 07:16:12.289529 4909 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 07:16:12 crc kubenswrapper[4909]: E1126 07:16:12.337610 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68371f8-f38e-44e5-bd68-d059f1e3e89a-cert podName:b68371f8-f38e-44e5-bd68-d059f1e3e89a nodeName:}" failed. No retries permitted until 2025-11-26 07:16:12.83755649 +0000 UTC m=+944.983767656 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b68371f8-f38e-44e5-bd68-d059f1e3e89a-cert") pod "openstack-baremetal-operator-controller-manager-77868f484-kdp8v" (UID: "b68371f8-f38e-44e5-bd68-d059f1e3e89a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.339141 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w967t\" (UniqueName: \"kubernetes.io/projected/cad0b373-54da-4331-aa01-27d08edaa1ef-kube-api-access-w967t\") pod \"ovn-operator-controller-manager-5b67cfc8fb-xcrzl\" (UID: \"cad0b373-54da-4331-aa01-27d08edaa1ef\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.350479 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8rdj\" (UniqueName: \"kubernetes.io/projected/cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef-kube-api-access-q8rdj\") pod \"mariadb-operator-controller-manager-64d7c556cd-872rr\" (UID: \"cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.353335 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hmpc\" (UniqueName: \"kubernetes.io/projected/61289245-0b12-4689-8a98-2b24544cacf8-kube-api-access-9hmpc\") pod \"octavia-operator-controller-manager-7979c68bc7-c696l\" (UID: \"61289245-0b12-4689-8a98-2b24544cacf8\") " pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.362572 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdssn\" (UniqueName: \"kubernetes.io/projected/af4a09dd-04e0-465d-a817-bacf1a52babe-kube-api-access-gdssn\") pod \"neutron-operator-controller-manager-6b6c55ffd5-dhp84\" (UID: \"af4a09dd-04e0-465d-a817-bacf1a52babe\") " pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.370369 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4jwp\" (UniqueName: \"kubernetes.io/projected/b68371f8-f38e-44e5-bd68-d059f1e3e89a-kube-api-access-q4jwp\") pod \"openstack-baremetal-operator-controller-manager-77868f484-kdp8v\" (UID: \"b68371f8-f38e-44e5-bd68-d059f1e3e89a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.384302 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.389109 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wjr4\" (UniqueName: \"kubernetes.io/projected/10e6987e-11d4-4c64-bc26-bb45590f3fff-kube-api-access-9wjr4\") pod \"placement-operator-controller-manager-867d87977b-5t9sx\" (UID: \"10e6987e-11d4-4c64-bc26-bb45590f3fff\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.389205 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4-cert\") pod \"infra-operator-controller-manager-577c5f6d94-d44wm\" (UID: \"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.389232 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ktct\" (UniqueName: \"kubernetes.io/projected/0f99fe6f-9209-4c74-9bcb-619212d7812e-kube-api-access-9ktct\") pod \"watcher-operator-controller-manager-6b56b8849f-fd6dq\" (UID: \"0f99fe6f-9209-4c74-9bcb-619212d7812e\") " pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.389255 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j74x\" (UniqueName: \"kubernetes.io/projected/365248fc-0b34-46df-bbdc-043f89694812-kube-api-access-7j74x\") pod \"test-operator-controller-manager-77db6bf9c-bz9j9\" (UID: \"365248fc-0b34-46df-bbdc-043f89694812\") " pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.389290 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc42w\" (UniqueName: \"kubernetes.io/projected/5b985112-f6b3-4879-b02e-8ac0e510730b-kube-api-access-rc42w\") pod \"swift-operator-controller-manager-cc9f5bc5c-kbwpk\" (UID: \"5b985112-f6b3-4879-b02e-8ac0e510730b\") " pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.389330 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvlsn\" (UniqueName: \"kubernetes.io/projected/f8afd5eb-02e8-4a94-be0d-19a709270945-kube-api-access-pvlsn\") pod \"telemetry-operator-controller-manager-58487d9bf4-7rjcs\" (UID: \"f8afd5eb-02e8-4a94-be0d-19a709270945\") " pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.397770 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.402702 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.404026 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4-cert\") pod \"infra-operator-controller-manager-577c5f6d94-d44wm\" (UID: \"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.409818 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.423418 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wjr4\" (UniqueName: \"kubernetes.io/projected/10e6987e-11d4-4c64-bc26-bb45590f3fff-kube-api-access-9wjr4\") pod \"placement-operator-controller-manager-867d87977b-5t9sx\" (UID: \"10e6987e-11d4-4c64-bc26-bb45590f3fff\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.429788 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvlsn\" (UniqueName: \"kubernetes.io/projected/f8afd5eb-02e8-4a94-be0d-19a709270945-kube-api-access-pvlsn\") pod \"telemetry-operator-controller-manager-58487d9bf4-7rjcs\" (UID: \"f8afd5eb-02e8-4a94-be0d-19a709270945\") " pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.430407 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc42w\" (UniqueName: \"kubernetes.io/projected/5b985112-f6b3-4879-b02e-8ac0e510730b-kube-api-access-rc42w\") pod \"swift-operator-controller-manager-cc9f5bc5c-kbwpk\" (UID: \"5b985112-f6b3-4879-b02e-8ac0e510730b\") " pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.464575 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.471814 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.472890 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.475597 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xr9z7" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.475740 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.485346 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.485772 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.490939 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.491726 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.493189 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7kf4x" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.494231 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ktct\" (UniqueName: \"kubernetes.io/projected/0f99fe6f-9209-4c74-9bcb-619212d7812e-kube-api-access-9ktct\") pod \"watcher-operator-controller-manager-6b56b8849f-fd6dq\" (UID: \"0f99fe6f-9209-4c74-9bcb-619212d7812e\") " pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.494271 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j74x\" (UniqueName: \"kubernetes.io/projected/365248fc-0b34-46df-bbdc-043f89694812-kube-api-access-7j74x\") pod \"test-operator-controller-manager-77db6bf9c-bz9j9\" (UID: \"365248fc-0b34-46df-bbdc-043f89694812\") " pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.521847 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.522226 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ktct\" (UniqueName: \"kubernetes.io/projected/0f99fe6f-9209-4c74-9bcb-619212d7812e-kube-api-access-9ktct\") pod \"watcher-operator-controller-manager-6b56b8849f-fd6dq\" (UID: \"0f99fe6f-9209-4c74-9bcb-619212d7812e\") " pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.523795 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j74x\" (UniqueName: \"kubernetes.io/projected/365248fc-0b34-46df-bbdc-043f89694812-kube-api-access-7j74x\") pod \"test-operator-controller-manager-77db6bf9c-bz9j9\" (UID: \"365248fc-0b34-46df-bbdc-043f89694812\") " pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.528101 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.555053 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.565995 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.593484 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.595988 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fea4eb2c-ad33-4504-a4e4-8c82875b2d0c-cert\") pod \"openstack-operator-controller-manager-68c78b6ff8-dmnlq\" (UID: \"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c\") " pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.596028 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlswz\" (UniqueName: \"kubernetes.io/projected/fea4eb2c-ad33-4504-a4e4-8c82875b2d0c-kube-api-access-vlswz\") pod \"openstack-operator-controller-manager-68c78b6ff8-dmnlq\" (UID: \"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c\") " pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.596050 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj8zz\" (UniqueName: \"kubernetes.io/projected/20a1b8f0-7e93-4d4a-b527-7470d128a2bc-kube-api-access-wj8zz\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-w69tb\" (UID: \"20a1b8f0-7e93-4d4a-b527-7470d128a2bc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.601494 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.616009 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.632771 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.666776 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.697619 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fea4eb2c-ad33-4504-a4e4-8c82875b2d0c-cert\") pod \"openstack-operator-controller-manager-68c78b6ff8-dmnlq\" (UID: \"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c\") " pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.697659 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlswz\" (UniqueName: \"kubernetes.io/projected/fea4eb2c-ad33-4504-a4e4-8c82875b2d0c-kube-api-access-vlswz\") pod \"openstack-operator-controller-manager-68c78b6ff8-dmnlq\" (UID: \"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c\") " pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.697681 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj8zz\" (UniqueName: \"kubernetes.io/projected/20a1b8f0-7e93-4d4a-b527-7470d128a2bc-kube-api-access-wj8zz\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-w69tb\" (UID: \"20a1b8f0-7e93-4d4a-b527-7470d128a2bc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" Nov 26 07:16:12 crc kubenswrapper[4909]: E1126 07:16:12.698293 4909 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 26 07:16:12 crc kubenswrapper[4909]: E1126 07:16:12.699840 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fea4eb2c-ad33-4504-a4e4-8c82875b2d0c-cert podName:fea4eb2c-ad33-4504-a4e4-8c82875b2d0c nodeName:}" failed. No retries permitted until 2025-11-26 07:16:13.198322809 +0000 UTC m=+945.344533975 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fea4eb2c-ad33-4504-a4e4-8c82875b2d0c-cert") pod "openstack-operator-controller-manager-68c78b6ff8-dmnlq" (UID: "fea4eb2c-ad33-4504-a4e4-8c82875b2d0c") : secret "webhook-server-cert" not found Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.724485 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlswz\" (UniqueName: \"kubernetes.io/projected/fea4eb2c-ad33-4504-a4e4-8c82875b2d0c-kube-api-access-vlswz\") pod \"openstack-operator-controller-manager-68c78b6ff8-dmnlq\" (UID: \"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c\") " pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.725557 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj8zz\" (UniqueName: \"kubernetes.io/projected/20a1b8f0-7e93-4d4a-b527-7470d128a2bc-kube-api-access-wj8zz\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-w69tb\" (UID: \"20a1b8f0-7e93-4d4a-b527-7470d128a2bc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.747842 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" event={"ID":"f7f77917-da54-4e82-a356-80000a53395a","Type":"ContainerStarted","Data":"2451dd74f9d4bf41c90aa191b3212d93d12cf8d45f81b4a9b5cd58c63828767b"} Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.784542 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.791397 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.799045 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp"] Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.828952 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" Nov 26 07:16:12 crc kubenswrapper[4909]: E1126 07:16:12.901230 4909 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 07:16:12 crc kubenswrapper[4909]: E1126 07:16:12.901293 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68371f8-f38e-44e5-bd68-d059f1e3e89a-cert podName:b68371f8-f38e-44e5-bd68-d059f1e3e89a nodeName:}" failed. No retries permitted until 2025-11-26 07:16:13.901274505 +0000 UTC m=+946.047485661 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b68371f8-f38e-44e5-bd68-d059f1e3e89a-cert") pod "openstack-baremetal-operator-controller-manager-77868f484-kdp8v" (UID: "b68371f8-f38e-44e5-bd68-d059f1e3e89a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 07:16:12 crc kubenswrapper[4909]: I1126 07:16:12.901047 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b68371f8-f38e-44e5-bd68-d059f1e3e89a-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-kdp8v\" (UID: \"b68371f8-f38e-44e5-bd68-d059f1e3e89a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.125885 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw"] Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.139936 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql"] Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.206701 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fea4eb2c-ad33-4504-a4e4-8c82875b2d0c-cert\") pod \"openstack-operator-controller-manager-68c78b6ff8-dmnlq\" (UID: \"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c\") " pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.211619 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fea4eb2c-ad33-4504-a4e4-8c82875b2d0c-cert\") pod \"openstack-operator-controller-manager-68c78b6ff8-dmnlq\" (UID: \"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c\") " pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.255894 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm"] Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.268670 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q"] Nov 26 07:16:13 crc kubenswrapper[4909]: W1126 07:16:13.273726 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a162aeb_8377_45aa_bd44_6b8aed2f93fb.slice/crio-b296f02105fe3aaba5d4a7d40e48585714ef15a76e233220b2e55505cd565349 WatchSource:0}: Error finding container b296f02105fe3aaba5d4a7d40e48585714ef15a76e233220b2e55505cd565349: Status 404 returned error can't find the container with id b296f02105fe3aaba5d4a7d40e48585714ef15a76e233220b2e55505cd565349 Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.283951 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-8486p"] Nov 26 07:16:13 crc kubenswrapper[4909]: W1126 07:16:13.302796 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c9c6404_9f47_434c_ac1b_d08cd48d5156.slice/crio-30eeb0c6082c7b74573fe97c6f9801d5f28f3a6378743528f83e901e2add8f80 WatchSource:0}: Error finding container 30eeb0c6082c7b74573fe97c6f9801d5f28f3a6378743528f83e901e2add8f80: Status 
404 returned error can't find the container with id 30eeb0c6082c7b74573fe97c6f9801d5f28f3a6378743528f83e901e2add8f80
Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.419306 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq"
Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.453794 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-646fd589f9-phclr"]
Nov 26 07:16:13 crc kubenswrapper[4909]: W1126 07:16:13.458467 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61289245_0b12_4689_8a98_2b24544cacf8.slice/crio-0d58a884b63cb4e372f805e311fabb33444a7ec7bd8ac13fe37f1c3b827a6e7a WatchSource:0}: Error finding container 0d58a884b63cb4e372f805e311fabb33444a7ec7bd8ac13fe37f1c3b827a6e7a: Status 404 returned error can't find the container with id 0d58a884b63cb4e372f805e311fabb33444a7ec7bd8ac13fe37f1c3b827a6e7a
Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.459567 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk"]
Nov 26 07:16:13 crc kubenswrapper[4909]: W1126 07:16:13.465410 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b985112_f6b3_4879_b02e_8ac0e510730b.slice/crio-529b2a801ddce34e868e1cb7a44efd8e29160163be339951188f472e811c7c40 WatchSource:0}: Error finding container 529b2a801ddce34e868e1cb7a44efd8e29160163be339951188f472e811c7c40: Status 404 returned error can't find the container with id 529b2a801ddce34e868e1cb7a44efd8e29160163be339951188f472e811c7c40
Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.472025 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l"]
Nov 26 07:16:13 crc kubenswrapper[4909]: W1126 07:16:13.475163 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f41a032_71ff_4608_aa2c_b16469fe55a0.slice/crio-51b19d7c4c603387b30735f7ea15fded1b35f1bcd447497ad5cb03f5ed6ce73c WatchSource:0}: Error finding container 51b19d7c4c603387b30735f7ea15fded1b35f1bcd447497ad5cb03f5ed6ce73c: Status 404 returned error can't find the container with id 51b19d7c4c603387b30735f7ea15fded1b35f1bcd447497ad5cb03f5ed6ce73c
Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.648288 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq"]
Nov 26 07:16:13 crc kubenswrapper[4909]: W1126 07:16:13.652965 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfea4eb2c_ad33_4504_a4e4_8c82875b2d0c.slice/crio-a16392c2e199db0c2ed7b1e4d1779b066c72526f54b40d60bc112e5a683ca681 WatchSource:0}: Error finding container a16392c2e199db0c2ed7b1e4d1779b066c72526f54b40d60bc112e5a683ca681: Status 404 returned error can't find the container with id a16392c2e199db0c2ed7b1e4d1779b066c72526f54b40d60bc112e5a683ca681
Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.756892 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" event={"ID":"757566f7-a07b-4623-8668-b39f715ea7a9","Type":"ContainerStarted","Data":"fd4ca83bfd7e5b63a0291175f6ad0c727af55e673bfcfecb1b652536f27f3bde"}
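[editor's note] The W1126 manager.go:1169 warnings above come from cadvisor inside the kubelet: its raw cgroup watcher notices the brand-new crio-<id> cgroup for each starting container and queries the runtime before CRI-O has finished registering that container, so the lookup returns 404. During a pod fan-out like this the warnings are expected and transient; the very containers it could not find report ContainerStarted moments later (compare the 0d58a884... id at 07:16:13.458467 with the octavia PLEG event at 07:16:13.763201 below). The cgroup path in each warning encodes both the pod UID and the container id; here is a small, self-contained Go sketch that pulls both out of one of these lines, with the sample string copied verbatim from the log:

// parsewatch.go: extract the pod UID and CRI-O container id from a cadvisor
// "Failed to process watch event" warning like the ones above.
package main

import (
	"fmt"
	"regexp"
)

// Sample taken verbatim from the log above.
const line = `Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61289245_0b12_4689_8a98_2b24544cacf8.slice/crio-0d58a884b63cb4e372f805e311fabb33444a7ec7bd8ac13fe37f1c3b827a6e7a WatchSource:0}`

func main() {
	// The cgroup leaf is "crio-<64-hex container id>"; the pod UID (with
	// dashes replaced by underscores) is embedded in the parent slice name.
	re := regexp.MustCompile(`kubepods-burstable-pod([0-9a-f_]+)\.slice/crio-([0-9a-f]{64})`)
	m := re.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("pod UID:", m[1])   // 61289245_0b12_4689_8a98_2b24544cacf8
	fmt.Println("container:", m[2]) // 0d58a884b63cb4e372f8...
}

[end of note]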
event={"ID":"757566f7-a07b-4623-8668-b39f715ea7a9","Type":"ContainerStarted","Data":"fd4ca83bfd7e5b63a0291175f6ad0c727af55e673bfcfecb1b652536f27f3bde"} Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.758703 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" event={"ID":"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c","Type":"ContainerStarted","Data":"a16392c2e199db0c2ed7b1e4d1779b066c72526f54b40d60bc112e5a683ca681"} Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.761891 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" event={"ID":"f4c87de0-5b1c-44f8-a2fb-1949a3f4af03","Type":"ContainerStarted","Data":"6077c1a93cbb4a95219ca8b053ce6087711c6691deef0507bd4c0e3eb599754d"} Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.763201 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" event={"ID":"61289245-0b12-4689-8a98-2b24544cacf8","Type":"ContainerStarted","Data":"0d58a884b63cb4e372f805e311fabb33444a7ec7bd8ac13fe37f1c3b827a6e7a"} Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.764044 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" event={"ID":"b3ca7f6d-4dba-4e22-ae42-f4184932fba2","Type":"ContainerStarted","Data":"634b1abde58b7f4e904f3c1002a8b40f5ccbdcc34e2fc783f0356257fa204f71"} Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.765499 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" event={"ID":"8c9c6404-9f47-434c-ac1b-d08cd48d5156","Type":"ContainerStarted","Data":"30eeb0c6082c7b74573fe97c6f9801d5f28f3a6378743528f83e901e2add8f80"} Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.778274 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" event={"ID":"0ebad6d0-e522-4012-869e-903c89bd1703","Type":"ContainerStarted","Data":"3360bbfc49722c24d4a80a4890494fb6aac8ea8a0f5009026c28e861f453a81a"} Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.780535 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq"] Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.781498 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" event={"ID":"4a162aeb-8377-45aa-bd44-6b8aed2f93fb","Type":"ContainerStarted","Data":"b296f02105fe3aaba5d4a7d40e48585714ef15a76e233220b2e55505cd565349"} Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.784756 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" event={"ID":"9f41a032-71ff-4608-aa2c-b16469fe55a0","Type":"ContainerStarted","Data":"51b19d7c4c603387b30735f7ea15fded1b35f1bcd447497ad5cb03f5ed6ce73c"} Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.788928 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" event={"ID":"cd83d237-7922-4458-9fce-8c296d0ccc0f","Type":"ContainerStarted","Data":"ae6124bacbfaefd3697fe536ecb25d57adb6266f55b8a690a6aec2968252eac7"} Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.793813 4909 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" event={"ID":"5b985112-f6b3-4879-b02e-8ac0e510730b","Type":"ContainerStarted","Data":"529b2a801ddce34e868e1cb7a44efd8e29160163be339951188f472e811c7c40"} Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.797345 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr"] Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.805246 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" event={"ID":"138eaa02-be79-4e16-8627-cc582d5b6770","Type":"ContainerStarted","Data":"eed0443e25b393604fdd88333586bcf96c01a24be86b19387c30cb073815e692"} Nov 26 07:16:13 crc kubenswrapper[4909]: W1126 07:16:13.806427 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f99fe6f_9209_4c74_9bcb_619212d7812e.slice/crio-90a4db7a122ad2b13fa8238aa6e22426f85492ce751b3fc97567072848bdaca1 WatchSource:0}: Error finding container 90a4db7a122ad2b13fa8238aa6e22426f85492ce751b3fc97567072848bdaca1: Status 404 returned error can't find the container with id 90a4db7a122ad2b13fa8238aa6e22426f85492ce751b3fc97567072848bdaca1 Nov 26 07:16:13 crc kubenswrapper[4909]: W1126 07:16:13.808642 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8afd5eb_02e8_4a94_be0d_19a709270945.slice/crio-ef1d3a6bd0d78b54a69a3c6a6072ae4eeb3c8b6d947016bc44fb30dfe6bd5f53 WatchSource:0}: Error finding container ef1d3a6bd0d78b54a69a3c6a6072ae4eeb3c8b6d947016bc44fb30dfe6bd5f53: Status 404 returned error can't find the container with id ef1d3a6bd0d78b54a69a3c6a6072ae4eeb3c8b6d947016bc44fb30dfe6bd5f53 Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.809791 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs"] Nov 26 07:16:13 crc kubenswrapper[4909]: W1126 07:16:13.816121 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf4a09dd_04e0_465d_a817_bacf1a52babe.slice/crio-cdf27b39dd22b2085a984412110712fbad2deaff19830ba7032785c3c6ddd53b WatchSource:0}: Error finding container cdf27b39dd22b2085a984412110712fbad2deaff19830ba7032785c3c6ddd53b: Status 404 returned error can't find the container with id cdf27b39dd22b2085a984412110712fbad2deaff19830ba7032785c3c6ddd53b Nov 26 07:16:13 crc kubenswrapper[4909]: E1126 07:16:13.822495 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2c4fe20e044dd8ea1f60f2f3f5e3844d932b4b79439835bd8771c73f16b38312,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q8rdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-64d7c556cd-872rr_openstack-operators(cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.824082 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl"] Nov 26 07:16:13 crc kubenswrapper[4909]: E1126 07:16:13.825563 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:624b77b1b44f5e72a6c7d5910b04eb8070c499f83dcf364fb9dc5f2f8cb83c85,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7j74x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-77db6bf9c-bz9j9_openstack-operators(365248fc-0b34-46df-bbdc-043f89694812): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 07:16:13 crc kubenswrapper[4909]: W1126 07:16:13.828763 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20a1b8f0_7e93_4d4a_b527_7470d128a2bc.slice/crio-78d904116709bdfedc1dc3e293f1949d45cc0ec2f92958b46ea01a6ec7a4371a WatchSource:0}: Error finding container 78d904116709bdfedc1dc3e293f1949d45cc0ec2f92958b46ea01a6ec7a4371a: Status 404 returned error can't find the container with id 78d904116709bdfedc1dc3e293f1949d45cc0ec2f92958b46ea01a6ec7a4371a Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.835777 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84"] Nov 26 07:16:13 crc kubenswrapper[4909]: E1126 07:16:13.841233 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wj8zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-w69tb_openstack-operators(20a1b8f0-7e93-4d4a-b527-7470d128a2bc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 07:16:13 crc kubenswrapper[4909]: E1126 07:16:13.842401 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" podUID="20a1b8f0-7e93-4d4a-b527-7470d128a2bc" Nov 26 07:16:13 crc kubenswrapper[4909]: E1126 07:16:13.847480 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:fd917de0cf800ec284ee0c3f2906a06d85ea18cb75a5b06c8eb305750467986d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wjr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-867d87977b-5t9sx_openstack-operators(10e6987e-11d4-4c64-bc26-bb45590f3fff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 07:16:13 crc kubenswrapper[4909]: E1126 07:16:13.847731 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2c837009de6475bc22534827c03df6d8649277b71f1c30de2087b6c52aafb326,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w967t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5b67cfc8fb-xcrzl_openstack-operators(cad0b373-54da-4331-aa01-27d08edaa1ef): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.851402 4909 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9"] Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.867888 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb"] Nov 26 07:16:13 crc kubenswrapper[4909]: W1126 07:16:13.873670 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef5bb2b0_bdf7_4b26_9df0_44d9993d02e4.slice/crio-2523291c97bdffa90474b4d2d01422a28609b9b6f8f9c8971e5231322cdf32cd WatchSource:0}: Error finding container 2523291c97bdffa90474b4d2d01422a28609b9b6f8f9c8971e5231322cdf32cd: Status 404 returned error can't find the container with id 2523291c97bdffa90474b4d2d01422a28609b9b6f8f9c8971e5231322cdf32cd Nov 26 07:16:13 crc kubenswrapper[4909]: E1126 07:16:13.875810 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:6f630b256a17a0d40ec49bbf3bfbc65118e712cafea97fb0eee03dbc037d6bf8,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqcbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-577c5f6d94-d44wm_openstack-operators(ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.880756 
4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx"]
Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.891775 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm"]
Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.920014 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b68371f8-f38e-44e5-bd68-d059f1e3e89a-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-kdp8v\" (UID: \"b68371f8-f38e-44e5-bd68-d059f1e3e89a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v"
Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.925731 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b68371f8-f38e-44e5-bd68-d059f1e3e89a-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-kdp8v\" (UID: \"b68371f8-f38e-44e5-bd68-d059f1e3e89a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v"
Nov 26 07:16:13 crc kubenswrapper[4909]: I1126 07:16:13.944834 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v"
Nov 26 07:16:14 crc kubenswrapper[4909]: E1126 07:16:14.280308 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" podUID="cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef"
Nov 26 07:16:14 crc kubenswrapper[4909]: E1126 07:16:14.281014 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" podUID="365248fc-0b34-46df-bbdc-043f89694812"
Nov 26 07:16:14 crc kubenswrapper[4909]: E1126 07:16:14.304285 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" podUID="10e6987e-11d4-4c64-bc26-bb45590f3fff"
Nov 26 07:16:14 crc kubenswrapper[4909]: E1126 07:16:14.337315 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" podUID="cad0b373-54da-4331-aa01-27d08edaa1ef"
Nov 26 07:16:14 crc kubenswrapper[4909]: E1126 07:16:14.408129 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" podUID="ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4"
Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.644194 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v"]
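[annotation] The five "ErrImagePull: pull QPS exceeded" failures above are rejections by the kubelet itself, not by the registry: with roughly twenty operator pods landing on the node at once, the kubelet's registry pull limiter (the KubeletConfiguration fields registryPullQPS and registryBurst, which default to 5 and 10) runs out of tokens and fails the surplus pulls immediately, leaving those pods to retry through the normal image backoff. A minimal sketch of that behavior, assuming the client-go token-bucket limiter the kubelet wraps around its image service; pullImage and errPullQPS are our own illustrative names:

    package main

    import (
        "errors"
        "fmt"

        "k8s.io/client-go/util/flowcontrol"
    )

    // errPullQPS mirrors the "pull QPS exceeded" failure seen in the log.
    var errPullQPS = errors.New("pull QPS exceeded")

    // pullImage is a hypothetical stand-in for the kubelet's throttled image
    // service: when the bucket is empty the pull fails immediately rather
    // than queueing, which the pod worker then reports as ErrImagePull.
    func pullImage(limiter flowcontrol.RateLimiter, image string) error {
        if !limiter.TryAccept() {
            return errPullQPS
        }
        return nil // a real pull via CRI would happen here
    }

    func main() {
        // 5 QPS with a burst of 10 are the documented kubelet defaults.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
        for i := 1; i <= 12; i++ {
            fmt.Printf("pull %2d: %v\n", i, pullImage(limiter, "quay.io/example/operator"))
        }
    }

With a burst of ten, the first ten near-simultaneous pulls are admitted and the rest fail at once, which matches the split above: most operators start pulling immediately while the mariadb, test, placement, ovn, infra and rabbitmq operators drop into backoff.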
event={"ID":"cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef","Type":"ContainerStarted","Data":"b651a343f0c30fa7d50894b456a9970ece6c5335bf396d4bb0a29acca4c1013a"} Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.816560 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" event={"ID":"cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef","Type":"ContainerStarted","Data":"73e2a4d5c35fdeeb17f015c177eaf611eb96c73b9fd6dc2095c16a0000bce241"} Nov 26 07:16:14 crc kubenswrapper[4909]: E1126 07:16:14.818323 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2c4fe20e044dd8ea1f60f2f3f5e3844d932b4b79439835bd8771c73f16b38312\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" podUID="cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef" Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.820787 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" event={"ID":"10e6987e-11d4-4c64-bc26-bb45590f3fff","Type":"ContainerStarted","Data":"6eab45ff6969881cb9f64506d918280c410fb8de12a1e73d6389af73c5710155"} Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.820831 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" event={"ID":"10e6987e-11d4-4c64-bc26-bb45590f3fff","Type":"ContainerStarted","Data":"45657f9047170d39127750538e36b4cd9296965512ea3e48f6bb8694dc23e3a8"} Nov 26 07:16:14 crc kubenswrapper[4909]: E1126 07:16:14.823357 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:fd917de0cf800ec284ee0c3f2906a06d85ea18cb75a5b06c8eb305750467986d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" podUID="10e6987e-11d4-4c64-bc26-bb45590f3fff" Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.834867 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" event={"ID":"0f99fe6f-9209-4c74-9bcb-619212d7812e","Type":"ContainerStarted","Data":"90a4db7a122ad2b13fa8238aa6e22426f85492ce751b3fc97567072848bdaca1"} Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.840272 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" event={"ID":"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c","Type":"ContainerStarted","Data":"cd403c2403829c6c32c36c6272a1e58f14d661cc5cc1db354ef0db0c50db8787"} Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.840309 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" event={"ID":"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c","Type":"ContainerStarted","Data":"fbcd16f049222efa2b4c9f8bf3f134bb2ad678d3f8e128aa18e1d7f8322cc466"} Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.842894 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.873885 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" event={"ID":"cad0b373-54da-4331-aa01-27d08edaa1ef","Type":"ContainerStarted","Data":"84eecc085f61ba715d9d651ccf16d36787ab8b50adf5fe0cb77811defaa07203"} Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.873932 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" event={"ID":"cad0b373-54da-4331-aa01-27d08edaa1ef","Type":"ContainerStarted","Data":"bc8e9ca2650f6ec5d675d5e9095501209602f7dad531a3772cb0e5a26e100dda"} Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.878421 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" event={"ID":"b68371f8-f38e-44e5-bd68-d059f1e3e89a","Type":"ContainerStarted","Data":"401dcdd8836de4c7758e3aa363ad1fa9b4f36b7fbfea8f8bc49fe9e1fcd13301"} Nov 26 07:16:14 crc kubenswrapper[4909]: E1126 07:16:14.880411 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2c837009de6475bc22534827c03df6d8649277b71f1c30de2087b6c52aafb326\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" podUID="cad0b373-54da-4331-aa01-27d08edaa1ef" Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.880988 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" podStartSLOduration=2.88096684 podStartE2EDuration="2.88096684s" podCreationTimestamp="2025-11-26 07:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:16:14.879549633 +0000 UTC m=+947.025760799" watchObservedRunningTime="2025-11-26 07:16:14.88096684 +0000 UTC m=+947.027178016" Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.892632 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" event={"ID":"af4a09dd-04e0-465d-a817-bacf1a52babe","Type":"ContainerStarted","Data":"cdf27b39dd22b2085a984412110712fbad2deaff19830ba7032785c3c6ddd53b"} Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.895163 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" event={"ID":"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4","Type":"ContainerStarted","Data":"dd6b02c0b54ec52cbb0c494542360b95c09d80f81a88561ad41be413121a2c7d"} Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.895190 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" event={"ID":"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4","Type":"ContainerStarted","Data":"2523291c97bdffa90474b4d2d01422a28609b9b6f8f9c8971e5231322cdf32cd"} Nov 26 07:16:14 crc kubenswrapper[4909]: E1126 07:16:14.896545 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:6f630b256a17a0d40ec49bbf3bfbc65118e712cafea97fb0eee03dbc037d6bf8\\\"\"" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" podUID="ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4" Nov 26 07:16:14 crc kubenswrapper[4909]: 
I1126 07:16:14.909013 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" event={"ID":"20a1b8f0-7e93-4d4a-b527-7470d128a2bc","Type":"ContainerStarted","Data":"78d904116709bdfedc1dc3e293f1949d45cc0ec2f92958b46ea01a6ec7a4371a"}
Nov 26 07:16:14 crc kubenswrapper[4909]: E1126 07:16:14.910623 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" podUID="20a1b8f0-7e93-4d4a-b527-7470d128a2bc"
Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.913425 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" event={"ID":"365248fc-0b34-46df-bbdc-043f89694812","Type":"ContainerStarted","Data":"c52d8f2f2979fc753b5fd229ccad20e0efdd11d4ad819150ab645f7c6eb359f4"}
Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.913463 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" event={"ID":"365248fc-0b34-46df-bbdc-043f89694812","Type":"ContainerStarted","Data":"4580b8bc9c0c460c5bbcf7406e1bbf479bfeb17eb776a3b1f6218442cadcfee1"}
Nov 26 07:16:14 crc kubenswrapper[4909]: I1126 07:16:14.914723 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" event={"ID":"f8afd5eb-02e8-4a94-be0d-19a709270945","Type":"ContainerStarted","Data":"ef1d3a6bd0d78b54a69a3c6a6072ae4eeb3c8b6d947016bc44fb30dfe6bd5f53"}
Nov 26 07:16:14 crc kubenswrapper[4909]: E1126 07:16:14.915488 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:624b77b1b44f5e72a6c7d5910b04eb8070c499f83dcf364fb9dc5f2f8cb83c85\\\"\"" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" podUID="365248fc-0b34-46df-bbdc-043f89694812"
Nov 26 07:16:15 crc kubenswrapper[4909]: E1126 07:16:15.923789 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:6f630b256a17a0d40ec49bbf3bfbc65118e712cafea97fb0eee03dbc037d6bf8\\\"\"" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" podUID="ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4"
Nov 26 07:16:15 crc kubenswrapper[4909]: E1126 07:16:15.923870 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2c4fe20e044dd8ea1f60f2f3f5e3844d932b4b79439835bd8771c73f16b38312\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" podUID="cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef"
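[annotation] After the initial ErrImagePull, each resync of an affected pod is reported as ImagePullBackOff ("Back-off pulling image ...") until the per-image backoff window expires; only then is the pull retried. As far as we know the kubelet's defaults are a 10s initial delay that doubles per failure up to a 5m cap, tracked with the client-go Backoff helper sketched below (the image key is a made-up example, and the 10s/300s values are our assumption about the defaults):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Assumed kubelet defaults: 10s base delay, 300s ceiling.
        backoff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)
        key := "quay.io/example/operator@sha256:0000" // hypothetical backoff key
        now := time.Now()
        for failure := 1; failure <= 6; failure++ {
            backoff.Next(key, now) // record another failed pull attempt
            fmt.Printf("after failure %d: wait %v\n", failure, backoff.Get(key))
        }
        // prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s
    }

In this log each affected pod fails only once, waits out a single 10s window, and the retried pull then succeeds: the same pods report ContainerStarted for their manager containers at 07:16:30-07:16:33 below.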
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:fd917de0cf800ec284ee0c3f2906a06d85ea18cb75a5b06c8eb305750467986d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" podUID="10e6987e-11d4-4c64-bc26-bb45590f3fff" Nov 26 07:16:15 crc kubenswrapper[4909]: E1126 07:16:15.924077 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2c837009de6475bc22534827c03df6d8649277b71f1c30de2087b6c52aafb326\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" podUID="cad0b373-54da-4331-aa01-27d08edaa1ef" Nov 26 07:16:15 crc kubenswrapper[4909]: E1126 07:16:15.925767 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:624b77b1b44f5e72a6c7d5910b04eb8070c499f83dcf364fb9dc5f2f8cb83c85\\\"\"" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" podUID="365248fc-0b34-46df-bbdc-043f89694812" Nov 26 07:16:15 crc kubenswrapper[4909]: E1126 07:16:15.925791 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" podUID="20a1b8f0-7e93-4d4a-b527-7470d128a2bc" Nov 26 07:16:23 crc kubenswrapper[4909]: I1126 07:16:23.424723 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.001715 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" event={"ID":"757566f7-a07b-4623-8668-b39f715ea7a9","Type":"ContainerStarted","Data":"d8e79311312214891d8e4375b26e5a78f6dcfccc9c7f94a137bb0e3d16cb2b98"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.007387 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" event={"ID":"0ebad6d0-e522-4012-869e-903c89bd1703","Type":"ContainerStarted","Data":"f252468407ffea2990ccc044949fabec74a0e6724982ea1fc7ab61ae0f7bdaf7"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.012224 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" event={"ID":"4a162aeb-8377-45aa-bd44-6b8aed2f93fb","Type":"ContainerStarted","Data":"cd27919792f3030a6a04f85adb2e8b5fcd9101798b4d76c73155f3cf47c86a39"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.024367 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" event={"ID":"b68371f8-f38e-44e5-bd68-d059f1e3e89a","Type":"ContainerStarted","Data":"d1d3e23298dde5a2e483e661ac9cff048ea771648ac026f6f3f9e06b131b6457"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.024413 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" 
event={"ID":"b68371f8-f38e-44e5-bd68-d059f1e3e89a","Type":"ContainerStarted","Data":"cd38a99f92bc1ea46612d73d030c2dc68d5d425136395d9f7c844194b70d0b7c"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.025341 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.026957 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" event={"ID":"0f99fe6f-9209-4c74-9bcb-619212d7812e","Type":"ContainerStarted","Data":"c8a0f7ae5d327772206d173d6e51cf5b5530156a48b967a6ab19f4dd4468562b"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.026990 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" event={"ID":"0f99fe6f-9209-4c74-9bcb-619212d7812e","Type":"ContainerStarted","Data":"aff179b7fdfc67333a63677b6ff434b58e2d436093be19cb0786cbfec335676d"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.027560 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.038889 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" event={"ID":"138eaa02-be79-4e16-8627-cc582d5b6770","Type":"ContainerStarted","Data":"f8a4777f4152d766ae48f3cd62d8def4a6cb40f314da8d0bdafcce54d310c09c"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.038930 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" event={"ID":"138eaa02-be79-4e16-8627-cc582d5b6770","Type":"ContainerStarted","Data":"fd4c4b86e3f3f86cb067ee781caf02d5c897cf7cbba236d11652b50a8feacdc5"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.039040 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.044005 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" event={"ID":"5b985112-f6b3-4879-b02e-8ac0e510730b","Type":"ContainerStarted","Data":"21d251131462e9a3366bb80d7158e64a1cf6baab672bb53fd69665dc35598304"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.044049 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" event={"ID":"5b985112-f6b3-4879-b02e-8ac0e510730b","Type":"ContainerStarted","Data":"ce221dd83457649a5f28dcd0fcc35b3a63fafc0eee1cfe01f7b7e7a20fb689dd"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.044114 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.053098 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" event={"ID":"9f41a032-71ff-4608-aa2c-b16469fe55a0","Type":"ContainerStarted","Data":"ac1a2edc25071651334d0ffbc1843b636e077a8204acf9400bfc1803e4395a58"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.053237 4909 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.061349 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" event={"ID":"af4a09dd-04e0-465d-a817-bacf1a52babe","Type":"ContainerStarted","Data":"bc1d125ffc63dafd4f3c4861ea058ef2d347c94047260544e67516e6c7b32347"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.061651 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.077034 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" event={"ID":"f4c87de0-5b1c-44f8-a2fb-1949a3f4af03","Type":"ContainerStarted","Data":"04cb65e8b18dc7bdf040f74d21ac9f4198eb4fc44f2cf45dfe14eb8552b1ca17"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.080919 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" event={"ID":"f8afd5eb-02e8-4a94-be0d-19a709270945","Type":"ContainerStarted","Data":"c7a8b1902520ca416dbc1a1302a978f3619802384f635376b343e714d0c5fa4d"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.081836 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.088196 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" event={"ID":"b3ca7f6d-4dba-4e22-ae42-f4184932fba2","Type":"ContainerStarted","Data":"e550c2419b5af6d03322135ecf4f934e6214479b59cdbfb10e48825d20a9314c"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.089757 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" event={"ID":"8c9c6404-9f47-434c-ac1b-d08cd48d5156","Type":"ContainerStarted","Data":"fb7999a7f7ff3cd8b133f4854000a9b97fe779663bdbcc7766d0e10a64451e17"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.109656 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" event={"ID":"cd83d237-7922-4458-9fce-8c296d0ccc0f","Type":"ContainerStarted","Data":"80fc53263aa7e40af1c8dcd3f54d8e8ed14e57962477c3678d61494efa132dbd"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.122369 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" event={"ID":"f7f77917-da54-4e82-a356-80000a53395a","Type":"ContainerStarted","Data":"0d44d8328e723a8ecd29d6e033424663a8bd9c80192f9bce84c6db09cce8b051"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.122408 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" event={"ID":"f7f77917-da54-4e82-a356-80000a53395a","Type":"ContainerStarted","Data":"cce41246bda2a4d3cad65ddff80d5ff5faa3e06b26e9cfbf41081fb38b49d0b6"} Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.122979 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:16:24 crc 
kubenswrapper[4909]: I1126 07:16:24.134770 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" event={"ID":"61289245-0b12-4689-8a98-2b24544cacf8","Type":"ContainerStarted","Data":"5cc094590d1ef22f9ab1f460dcda65bde788d8fb0fad9d35cac512b326d5e61a"}
Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.135376 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l"
Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.178970 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" podStartSLOduration=5.110769496 podStartE2EDuration="13.178950386s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:14.684045957 +0000 UTC m=+946.830257123" lastFinishedPulling="2025-11-26 07:16:22.752226847 +0000 UTC m=+954.898438013" observedRunningTime="2025-11-26 07:16:24.160453501 +0000 UTC m=+956.306664657" watchObservedRunningTime="2025-11-26 07:16:24.178950386 +0000 UTC m=+956.325161552"
Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.261293 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" podStartSLOduration=4.207605345 podStartE2EDuration="13.261275166s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.811087245 +0000 UTC m=+945.957298411" lastFinishedPulling="2025-11-26 07:16:22.864757066 +0000 UTC m=+955.010968232" observedRunningTime="2025-11-26 07:16:24.257634927 +0000 UTC m=+956.403846093" watchObservedRunningTime="2025-11-26 07:16:24.261275166 +0000 UTC m=+956.407486332"
Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.403910 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" podStartSLOduration=3.468401961 podStartE2EDuration="12.403893572s" podCreationTimestamp="2025-11-26 07:16:12 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.815991467 +0000 UTC m=+945.962202633" lastFinishedPulling="2025-11-26 07:16:22.751483068 +0000 UTC m=+954.897694244" observedRunningTime="2025-11-26 07:16:24.403133431 +0000 UTC m=+956.549344597" watchObservedRunningTime="2025-11-26 07:16:24.403893572 +0000 UTC m=+956.550104738"
Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.406267 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" podStartSLOduration=3.349826691 podStartE2EDuration="13.406256756s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:12.699742698 +0000 UTC m=+944.845953864" lastFinishedPulling="2025-11-26 07:16:22.756172753 +0000 UTC m=+954.902383929" observedRunningTime="2025-11-26 07:16:24.34229653 +0000 UTC m=+956.488507696" watchObservedRunningTime="2025-11-26 07:16:24.406256756 +0000 UTC m=+956.552467912"
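[annotation] The pod_startup_latency_tracker entries decompose cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull interval (lastFinishedPulling minus firstStartedPulling) from it, so time spent pulling from the registry does not count against the startup SLO. The openstack-baremetal-operator entry above checks out exactly:

    podStartE2EDuration = 07:16:24.178950386 - 07:16:11           = 13.178950386s
    image pull interval = 07:16:22.752226847 - 07:16:14.684045957 =  8.068180890s
    podStartSLOduration = 13.178950386s - 8.068180890s            =  5.110769496s

which is the logged podStartSLOduration=5.110769496.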
Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.476008 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" podStartSLOduration=3.576690986 podStartE2EDuration="13.475992566s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:12.85228005 +0000 UTC m=+944.998491216" lastFinishedPulling="2025-11-26 07:16:22.75158163 +0000 UTC m=+954.897792796" observedRunningTime="2025-11-26 07:16:24.438214823 +0000 UTC m=+956.584425989" watchObservedRunningTime="2025-11-26 07:16:24.475992566 +0000 UTC m=+956.622203732"
Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.476825 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" podStartSLOduration=4.192660903 podStartE2EDuration="13.476821499s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.467325422 +0000 UTC m=+945.613536578" lastFinishedPulling="2025-11-26 07:16:22.751485978 +0000 UTC m=+954.897697174" observedRunningTime="2025-11-26 07:16:24.470481548 +0000 UTC m=+956.616692704" watchObservedRunningTime="2025-11-26 07:16:24.476821499 +0000 UTC m=+956.623032665"
Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.601142 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" podStartSLOduration=4.324063579 podStartE2EDuration="13.601127184s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.478919893 +0000 UTC m=+945.625131059" lastFinishedPulling="2025-11-26 07:16:22.755983498 +0000 UTC m=+954.902194664" observedRunningTime="2025-11-26 07:16:24.558339446 +0000 UTC m=+956.704550612" watchObservedRunningTime="2025-11-26 07:16:24.601127184 +0000 UTC m=+956.747338350"
Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.601472 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" podStartSLOduration=4.668698635 podStartE2EDuration="13.601468243s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.818614587 +0000 UTC m=+945.964825753" lastFinishedPulling="2025-11-26 07:16:22.751384195 +0000 UTC m=+954.897595361" observedRunningTime="2025-11-26 07:16:24.598456032 +0000 UTC m=+956.744667198" watchObservedRunningTime="2025-11-26 07:16:24.601468243 +0000 UTC m=+956.747679409"
Nov 26 07:16:24 crc kubenswrapper[4909]: I1126 07:16:24.642033 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" podStartSLOduration=4.343220412 podStartE2EDuration="13.642017961s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.460582461 +0000 UTC m=+945.606793627" lastFinishedPulling="2025-11-26 07:16:22.75938 +0000 UTC m=+954.905591176" observedRunningTime="2025-11-26 07:16:24.64049203 +0000 UTC m=+956.786703216" watchObservedRunningTime="2025-11-26 07:16:24.642017961 +0000 UTC m=+956.788229127"
Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.143004 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" event={"ID":"757566f7-a07b-4623-8668-b39f715ea7a9","Type":"ContainerStarted","Data":"c44558bf4a6b4b51fd65b53bbbc40ea65949467120959eadf0eda4192ab52874"}
Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.143902 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q"
Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.145092 4909 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" event={"ID":"0ebad6d0-e522-4012-869e-903c89bd1703","Type":"ContainerStarted","Data":"6fe147cf7c6b9bed67ab7d4ac084a2fc3a2aec27973a713e729f3474a1e51d28"} Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.145234 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.146758 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" event={"ID":"f4c87de0-5b1c-44f8-a2fb-1949a3f4af03","Type":"ContainerStarted","Data":"6be2873c80fca25fa9b9b6bbea2d0805a3aef30a351038854d77d5c709a74a91"} Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.146888 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.148505 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" event={"ID":"f8afd5eb-02e8-4a94-be0d-19a709270945","Type":"ContainerStarted","Data":"6e41e8b2b5ca20bc285f25e6af6a15dfbaaee0b1e3884ed569bc5fc02a834258"} Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.150305 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" event={"ID":"cd83d237-7922-4458-9fce-8c296d0ccc0f","Type":"ContainerStarted","Data":"31378ae871385db3e538b25e5b0baa915b9c94c8b1d018bca2ecac476a31f0aa"} Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.150447 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.152660 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" event={"ID":"af4a09dd-04e0-465d-a817-bacf1a52babe","Type":"ContainerStarted","Data":"eb13af2b94df8483d079234d74339ed6610f60c2eda731a34572e5a976efb2c2"} Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.154430 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" event={"ID":"9f41a032-71ff-4608-aa2c-b16469fe55a0","Type":"ContainerStarted","Data":"d6228ca752fb079e84efff19a3bb114c43be4101c72c8014355b2d0b2f13485b"} Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.156724 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" event={"ID":"61289245-0b12-4689-8a98-2b24544cacf8","Type":"ContainerStarted","Data":"296b295da4347e2dd3e18ae6f17e7dc97283f2733c15ca924b9caa2ce003ce49"} Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.158821 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" event={"ID":"8c9c6404-9f47-434c-ac1b-d08cd48d5156","Type":"ContainerStarted","Data":"e5ad526f4f2e137845c5e84bd4694ffaa8eb2c47652e1ebc43c4bdf30e9c494f"} Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.158959 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 
07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.160930 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" event={"ID":"4a162aeb-8377-45aa-bd44-6b8aed2f93fb","Type":"ContainerStarted","Data":"0bc7e5904dca41af317b64bdb5f99f30892162db1cdb63b3b28e7b734d01c55d"} Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.161098 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.162693 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" event={"ID":"b3ca7f6d-4dba-4e22-ae42-f4184932fba2","Type":"ContainerStarted","Data":"f9f5dbe93ac8a80364cae06745d8a400c2c78213d77340cccb63c5ed5cc0a8b6"} Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.163055 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.169917 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" podStartSLOduration=4.649244552 podStartE2EDuration="14.169898404s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.276040829 +0000 UTC m=+945.422251995" lastFinishedPulling="2025-11-26 07:16:22.796694681 +0000 UTC m=+954.942905847" observedRunningTime="2025-11-26 07:16:25.165664741 +0000 UTC m=+957.311875907" watchObservedRunningTime="2025-11-26 07:16:25.169898404 +0000 UTC m=+957.316109580" Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.188454 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" podStartSLOduration=4.289774898 podStartE2EDuration="14.188432931s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:12.852626919 +0000 UTC m=+944.998838095" lastFinishedPulling="2025-11-26 07:16:22.751284952 +0000 UTC m=+954.897496128" observedRunningTime="2025-11-26 07:16:25.182180353 +0000 UTC m=+957.328391529" watchObservedRunningTime="2025-11-26 07:16:25.188432931 +0000 UTC m=+957.334644107" Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.215384 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" podStartSLOduration=4.6105119519999995 podStartE2EDuration="14.215366673s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.155349841 +0000 UTC m=+945.301561007" lastFinishedPulling="2025-11-26 07:16:22.760204552 +0000 UTC m=+954.906415728" observedRunningTime="2025-11-26 07:16:25.202778116 +0000 UTC m=+957.348989282" watchObservedRunningTime="2025-11-26 07:16:25.215366673 +0000 UTC m=+957.361577839" Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.217336 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" podStartSLOduration=4.270502501 podStartE2EDuration="14.217322796s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:12.849014163 +0000 UTC m=+944.995225329" lastFinishedPulling="2025-11-26 07:16:22.795834458 
+0000 UTC m=+954.942045624" observedRunningTime="2025-11-26 07:16:25.21334412 +0000 UTC m=+957.359555296" watchObservedRunningTime="2025-11-26 07:16:25.217322796 +0000 UTC m=+957.363533962" Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.228688 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" podStartSLOduration=4.752706108 podStartE2EDuration="14.22865291s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.275623168 +0000 UTC m=+945.421834334" lastFinishedPulling="2025-11-26 07:16:22.75156993 +0000 UTC m=+954.897781136" observedRunningTime="2025-11-26 07:16:25.22643128 +0000 UTC m=+957.372642446" watchObservedRunningTime="2025-11-26 07:16:25.22865291 +0000 UTC m=+957.374864076" Nov 26 07:16:25 crc kubenswrapper[4909]: I1126 07:16:25.242541 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" podStartSLOduration=4.626969174 podStartE2EDuration="14.242524512s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.135662683 +0000 UTC m=+945.281873849" lastFinishedPulling="2025-11-26 07:16:22.751217991 +0000 UTC m=+954.897429187" observedRunningTime="2025-11-26 07:16:25.241733941 +0000 UTC m=+957.387945117" watchObservedRunningTime="2025-11-26 07:16:25.242524512 +0000 UTC m=+957.388735688" Nov 26 07:16:30 crc kubenswrapper[4909]: I1126 07:16:30.206468 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" event={"ID":"cad0b373-54da-4331-aa01-27d08edaa1ef","Type":"ContainerStarted","Data":"dead8b7e9834ca28e8c1aa52ec89f6bb684f45f6199fe905ea9ab11bb8c8ae3b"} Nov 26 07:16:30 crc kubenswrapper[4909]: I1126 07:16:30.207196 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:16:30 crc kubenswrapper[4909]: I1126 07:16:30.209666 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" event={"ID":"cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef","Type":"ContainerStarted","Data":"33910e2e3992da9ec90e83cd704c37836f1b3d4f39dad9abeeee8bf3c0a67373"} Nov 26 07:16:30 crc kubenswrapper[4909]: I1126 07:16:30.210078 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:16:30 crc kubenswrapper[4909]: I1126 07:16:30.213666 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" event={"ID":"10e6987e-11d4-4c64-bc26-bb45590f3fff","Type":"ContainerStarted","Data":"38c8800ab7720866cc65609da62060129d0c4d094d0da404966a84a32fa4aa31"} Nov 26 07:16:30 crc kubenswrapper[4909]: I1126 07:16:30.214000 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:16:30 crc kubenswrapper[4909]: I1126 07:16:30.227810 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" podStartSLOduration=9.786112655 podStartE2EDuration="19.227789148s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 
Nov 26 07:16:30 crc kubenswrapper[4909]: I1126 07:16:30.227810 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" podStartSLOduration=9.786112655 podStartE2EDuration="19.227789148s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.309729733 +0000 UTC m=+945.455940899" lastFinishedPulling="2025-11-26 07:16:22.751406226 +0000 UTC m=+954.897617392" observedRunningTime="2025-11-26 07:16:25.258974784 +0000 UTC m=+957.405185950" watchObservedRunningTime="2025-11-26 07:16:30.227789148 +0000 UTC m=+962.374000314"
Nov 26 07:16:30 crc kubenswrapper[4909]: I1126 07:16:30.228758 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" podStartSLOduration=3.343833678 podStartE2EDuration="19.228749984s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.847528672 +0000 UTC m=+945.993739838" lastFinishedPulling="2025-11-26 07:16:29.732444978 +0000 UTC m=+961.878656144" observedRunningTime="2025-11-26 07:16:30.221282443 +0000 UTC m=+962.367493619" watchObservedRunningTime="2025-11-26 07:16:30.228749984 +0000 UTC m=+962.374961150"
Nov 26 07:16:30 crc kubenswrapper[4909]: I1126 07:16:30.244853 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" podStartSLOduration=3.339186594 podStartE2EDuration="19.244836906s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.847230964 +0000 UTC m=+945.993442130" lastFinishedPulling="2025-11-26 07:16:29.752881286 +0000 UTC m=+961.899092442" observedRunningTime="2025-11-26 07:16:30.238065333 +0000 UTC m=+962.384276499" watchObservedRunningTime="2025-11-26 07:16:30.244836906 +0000 UTC m=+962.391048072"
Nov 26 07:16:31 crc kubenswrapper[4909]: I1126 07:16:31.222601 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" event={"ID":"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4","Type":"ContainerStarted","Data":"c776eb9eb4ab8b9b3add0bcaab548f59d41d097d83cec5fde25f6c99843ba162"}
Nov 26 07:16:31 crc kubenswrapper[4909]: I1126 07:16:31.223254 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm"
Nov 26 07:16:31 crc kubenswrapper[4909]: I1126 07:16:31.243488 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" podStartSLOduration=3.47920194 podStartE2EDuration="20.243470009s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.875680048 +0000 UTC m=+946.021891214" lastFinishedPulling="2025-11-26 07:16:30.639948117 +0000 UTC m=+962.786159283" observedRunningTime="2025-11-26 07:16:31.240874369 +0000 UTC m=+963.387085555" watchObservedRunningTime="2025-11-26 07:16:31.243470009 +0000 UTC m=+963.389681175"
Nov 26 07:16:31 crc kubenswrapper[4909]: I1126 07:16:31.243625 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" podStartSLOduration=4.327220913 podStartE2EDuration="20.243618863s" podCreationTimestamp="2025-11-26 07:16:11 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.822363307 +0000 UTC m=+945.968574473" lastFinishedPulling="2025-11-26 07:16:29.738761257 +0000 UTC m=+961.884972423" observedRunningTime="2025-11-26 07:16:30.27406875 +0000 UTC m=+962.420279916" watchObservedRunningTime="2025-11-26 07:16:31.243618863 +0000 UTC m=+963.389830039"
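
Every timestamp in these records carries two readings: the wall-clock time and an "m=+" suffix, which is Go's monotonic clock reading (here m=0 corresponds to about 07:00:28 UTC, i.e. roughly the kubelet process start). Monotonic and wall-clock deltas agree to the nanosecond within these records; for the keystone record, observedRunningTime minus firstStartedPulling is 11.889623912s on both scales. A short standard-library illustration of the suffix:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(50 * time.Millisecond)
        now := time.Now()
        fmt.Println(now)            // ends in "m=+0.05...": the monotonic reading
        fmt.Println(now.Sub(start)) // ~50ms, computed from the monotonic clock
        fmt.Println(now.Round(0))   // Round(0) strips the monotonic reading
    }
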
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:16:31 crc kubenswrapper[4909]: I1126 07:16:31.941150 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:16:31 crc kubenswrapper[4909]: I1126 07:16:31.976694 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:16:31 crc kubenswrapper[4909]: I1126 07:16:31.993349 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:16:32 crc kubenswrapper[4909]: I1126 07:16:32.007782 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:16:32 crc kubenswrapper[4909]: I1126 07:16:32.056342 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 07:16:32 crc kubenswrapper[4909]: I1126 07:16:32.153874 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" Nov 26 07:16:32 crc kubenswrapper[4909]: I1126 07:16:32.316769 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:16:32 crc kubenswrapper[4909]: I1126 07:16:32.386933 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:16:32 crc kubenswrapper[4909]: I1126 07:16:32.401363 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 07:16:32 crc kubenswrapper[4909]: I1126 07:16:32.413778 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" Nov 26 07:16:32 crc kubenswrapper[4909]: I1126 07:16:32.524926 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" Nov 26 07:16:32 crc kubenswrapper[4909]: I1126 07:16:32.558411 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:16:32 crc kubenswrapper[4909]: I1126 07:16:32.604293 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" Nov 26 07:16:32 crc kubenswrapper[4909]: I1126 07:16:32.635424 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:16:33 crc kubenswrapper[4909]: I1126 07:16:33.234985 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" event={"ID":"20a1b8f0-7e93-4d4a-b527-7470d128a2bc","Type":"ContainerStarted","Data":"79c93578600ecd3758192b8bf7324e13d17a763ba661c201ae2a6a523c5c904a"} Nov 26 07:16:33 crc kubenswrapper[4909]: I1126 07:16:33.237693 
4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" event={"ID":"365248fc-0b34-46df-bbdc-043f89694812","Type":"ContainerStarted","Data":"2f0c1af130f72b92aaeb799fac7d478ed2dca786ccfc6225c4b0f1a81938746a"} Nov 26 07:16:33 crc kubenswrapper[4909]: I1126 07:16:33.238400 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" Nov 26 07:16:33 crc kubenswrapper[4909]: I1126 07:16:33.256141 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" podStartSLOduration=2.133215239 podStartE2EDuration="21.256123919s" podCreationTimestamp="2025-11-26 07:16:12 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.84108326 +0000 UTC m=+945.987294426" lastFinishedPulling="2025-11-26 07:16:32.96399194 +0000 UTC m=+965.110203106" observedRunningTime="2025-11-26 07:16:33.249366307 +0000 UTC m=+965.395577473" watchObservedRunningTime="2025-11-26 07:16:33.256123919 +0000 UTC m=+965.402335085" Nov 26 07:16:33 crc kubenswrapper[4909]: I1126 07:16:33.267443 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" podStartSLOduration=2.150107352 podStartE2EDuration="21.267426282s" podCreationTimestamp="2025-11-26 07:16:12 +0000 UTC" firstStartedPulling="2025-11-26 07:16:13.825438251 +0000 UTC m=+945.971649417" lastFinishedPulling="2025-11-26 07:16:32.942757171 +0000 UTC m=+965.088968347" observedRunningTime="2025-11-26 07:16:33.264280758 +0000 UTC m=+965.410491924" watchObservedRunningTime="2025-11-26 07:16:33.267426282 +0000 UTC m=+965.413637448" Nov 26 07:16:33 crc kubenswrapper[4909]: I1126 07:16:33.954815 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" Nov 26 07:16:42 crc kubenswrapper[4909]: I1126 07:16:42.468268 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:16:42 crc kubenswrapper[4909]: I1126 07:16:42.490725 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:16:42 crc kubenswrapper[4909]: I1126 07:16:42.569179 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" Nov 26 07:16:42 crc kubenswrapper[4909]: I1126 07:16:42.618717 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:16:42 crc kubenswrapper[4909]: I1126 07:16:42.673572 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.344042 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tl6p"] Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.346251 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.344042 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tl6p"]
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.346251 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.349092 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.349642 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.354354 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tl6p"]
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.357285 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.357315 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-n4lk7"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.392678 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn7xl"]
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.393790 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.396603 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.413110 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn7xl"]
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.445481 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-config\") pod \"dnsmasq-dns-78dd6ddcc-pn7xl\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.445523 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc7jk\" (UniqueName: \"kubernetes.io/projected/2187b469-f631-45fe-bbf9-007050d474d2-kube-api-access-jc7jk\") pod \"dnsmasq-dns-78dd6ddcc-pn7xl\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.445545 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e04c2db-e792-4380-ab75-c274d8ef4777-config\") pod \"dnsmasq-dns-675f4bcbfc-8tl6p\" (UID: \"7e04c2db-e792-4380-ab75-c274d8ef4777\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.445779 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp849\" (UniqueName: \"kubernetes.io/projected/7e04c2db-e792-4380-ab75-c274d8ef4777-kube-api-access-mp849\") pod \"dnsmasq-dns-675f4bcbfc-8tl6p\" (UID: \"7e04c2db-e792-4380-ab75-c274d8ef4777\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.445851 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pn7xl\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.547777 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp849\" (UniqueName: \"kubernetes.io/projected/7e04c2db-e792-4380-ab75-c274d8ef4777-kube-api-access-mp849\") pod \"dnsmasq-dns-675f4bcbfc-8tl6p\" (UID: \"7e04c2db-e792-4380-ab75-c274d8ef4777\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.547870 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pn7xl\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.547956 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-config\") pod \"dnsmasq-dns-78dd6ddcc-pn7xl\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.547982 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc7jk\" (UniqueName: \"kubernetes.io/projected/2187b469-f631-45fe-bbf9-007050d474d2-kube-api-access-jc7jk\") pod \"dnsmasq-dns-78dd6ddcc-pn7xl\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.548008 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e04c2db-e792-4380-ab75-c274d8ef4777-config\") pod \"dnsmasq-dns-675f4bcbfc-8tl6p\" (UID: \"7e04c2db-e792-4380-ab75-c274d8ef4777\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.548825 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pn7xl\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.549122 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-config\") pod \"dnsmasq-dns-78dd6ddcc-pn7xl\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.549148 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e04c2db-e792-4380-ab75-c274d8ef4777-config\") pod \"dnsmasq-dns-675f4bcbfc-8tl6p\" (UID: \"7e04c2db-e792-4380-ab75-c274d8ef4777\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p"
Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.567062 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc7jk\" (UniqueName: \"kubernetes.io/projected/2187b469-f631-45fe-bbf9-007050d474d2-kube-api-access-jc7jk\") pod \"dnsmasq-dns-78dd6ddcc-pn7xl\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
"MountVolume.SetUp succeeded for volume \"kube-api-access-mp849\" (UniqueName: \"kubernetes.io/projected/7e04c2db-e792-4380-ab75-c274d8ef4777-kube-api-access-mp849\") pod \"dnsmasq-dns-675f4bcbfc-8tl6p\" (UID: \"7e04c2db-e792-4380-ab75-c274d8ef4777\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p" Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.667394 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p" Nov 26 07:16:56 crc kubenswrapper[4909]: I1126 07:16:56.707640 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl" Nov 26 07:16:57 crc kubenswrapper[4909]: I1126 07:16:57.116969 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tl6p"] Nov 26 07:16:57 crc kubenswrapper[4909]: W1126 07:16:57.123756 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e04c2db_e792_4380_ab75_c274d8ef4777.slice/crio-74413e12e66d05a61f6e28673c048034c4279b6672bd18947e4e2315d421df7c WatchSource:0}: Error finding container 74413e12e66d05a61f6e28673c048034c4279b6672bd18947e4e2315d421df7c: Status 404 returned error can't find the container with id 74413e12e66d05a61f6e28673c048034c4279b6672bd18947e4e2315d421df7c Nov 26 07:16:57 crc kubenswrapper[4909]: I1126 07:16:57.127250 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 07:16:57 crc kubenswrapper[4909]: I1126 07:16:57.179503 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn7xl"] Nov 26 07:16:57 crc kubenswrapper[4909]: W1126 07:16:57.184882 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2187b469_f631_45fe_bbf9_007050d474d2.slice/crio-c4b74dbc36983646f69bb5dbab467a3c73ed0a78c340c6ed81a45d14a71166a8 WatchSource:0}: Error finding container c4b74dbc36983646f69bb5dbab467a3c73ed0a78c340c6ed81a45d14a71166a8: Status 404 returned error can't find the container with id c4b74dbc36983646f69bb5dbab467a3c73ed0a78c340c6ed81a45d14a71166a8 Nov 26 07:16:57 crc kubenswrapper[4909]: I1126 07:16:57.478620 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p" event={"ID":"7e04c2db-e792-4380-ab75-c274d8ef4777","Type":"ContainerStarted","Data":"74413e12e66d05a61f6e28673c048034c4279b6672bd18947e4e2315d421df7c"} Nov 26 07:16:57 crc kubenswrapper[4909]: I1126 07:16:57.479883 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl" event={"ID":"2187b469-f631-45fe-bbf9-007050d474d2","Type":"ContainerStarted","Data":"c4b74dbc36983646f69bb5dbab467a3c73ed0a78c340c6ed81a45d14a71166a8"} Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.506886 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tl6p"] Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.539287 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pxwwr"] Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.540545 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.506886 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tl6p"]
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.539287 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pxwwr"]
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.540545 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pxwwr"
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.548995 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pxwwr"]
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.677769 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-pxwwr\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " pod="openstack/dnsmasq-dns-666b6646f7-pxwwr"
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.678073 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-config\") pod \"dnsmasq-dns-666b6646f7-pxwwr\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " pod="openstack/dnsmasq-dns-666b6646f7-pxwwr"
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.678169 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvt8\" (UniqueName: \"kubernetes.io/projected/276e634e-b151-474a-8231-481adbdfc0b5-kube-api-access-9qvt8\") pod \"dnsmasq-dns-666b6646f7-pxwwr\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " pod="openstack/dnsmasq-dns-666b6646f7-pxwwr"
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.791400 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn7xl"]
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.792007 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-config\") pod \"dnsmasq-dns-666b6646f7-pxwwr\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " pod="openstack/dnsmasq-dns-666b6646f7-pxwwr"
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.792067 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qvt8\" (UniqueName: \"kubernetes.io/projected/276e634e-b151-474a-8231-481adbdfc0b5-kube-api-access-9qvt8\") pod \"dnsmasq-dns-666b6646f7-pxwwr\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " pod="openstack/dnsmasq-dns-666b6646f7-pxwwr"
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.792127 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-pxwwr\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " pod="openstack/dnsmasq-dns-666b6646f7-pxwwr"
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.793116 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-pxwwr\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " pod="openstack/dnsmasq-dns-666b6646f7-pxwwr"
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.793564 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-config\") pod \"dnsmasq-dns-666b6646f7-pxwwr\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " pod="openstack/dnsmasq-dns-666b6646f7-pxwwr"
Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.813111
4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2jgxd"] Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.814330 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.826479 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qvt8\" (UniqueName: \"kubernetes.io/projected/276e634e-b151-474a-8231-481adbdfc0b5-kube-api-access-9qvt8\") pod \"dnsmasq-dns-666b6646f7-pxwwr\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " pod="openstack/dnsmasq-dns-666b6646f7-pxwwr" Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.827779 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2jgxd"] Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.864200 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pxwwr" Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.893736 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2jgxd\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.893847 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mb2d\" (UniqueName: \"kubernetes.io/projected/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-kube-api-access-7mb2d\") pod \"dnsmasq-dns-57d769cc4f-2jgxd\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.893900 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-config\") pod \"dnsmasq-dns-57d769cc4f-2jgxd\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.994745 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2jgxd\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.994838 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mb2d\" (UniqueName: \"kubernetes.io/projected/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-kube-api-access-7mb2d\") pod \"dnsmasq-dns-57d769cc4f-2jgxd\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.994876 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-config\") pod \"dnsmasq-dns-57d769cc4f-2jgxd\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.995970 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-config\") pod \"dnsmasq-dns-57d769cc4f-2jgxd\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:16:58 crc kubenswrapper[4909]: I1126 07:16:58.996541 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2jgxd\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.014895 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mb2d\" (UniqueName: \"kubernetes.io/projected/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-kube-api-access-7mb2d\") pod \"dnsmasq-dns-57d769cc4f-2jgxd\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.177174 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.379545 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pxwwr"] Nov 26 07:16:59 crc kubenswrapper[4909]: W1126 07:16:59.404582 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod276e634e_b151_474a_8231_481adbdfc0b5.slice/crio-926957347cf8c5b324ba25f4f805f07625106a9d92421f1a2524bd9275e78560 WatchSource:0}: Error finding container 926957347cf8c5b324ba25f4f805f07625106a9d92421f1a2524bd9275e78560: Status 404 returned error can't find the container with id 926957347cf8c5b324ba25f4f805f07625106a9d92421f1a2524bd9275e78560 Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.496216 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pxwwr" event={"ID":"276e634e-b151-474a-8231-481adbdfc0b5","Type":"ContainerStarted","Data":"926957347cf8c5b324ba25f4f805f07625106a9d92421f1a2524bd9275e78560"} Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.649545 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2jgxd"] Nov 26 07:16:59 crc kubenswrapper[4909]: W1126 07:16:59.658386 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f53f109_3ddd_4f4d_bb1d_2cba92fdfbee.slice/crio-5ba05713ba7d82d846a9f99251ab24d51faabd3564355ffa505ad98d1cc8cb5a WatchSource:0}: Error finding container 5ba05713ba7d82d846a9f99251ab24d51faabd3564355ffa505ad98d1cc8cb5a: Status 404 returned error can't find the container with id 5ba05713ba7d82d846a9f99251ab24d51faabd3564355ffa505ad98d1cc8cb5a Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.684505 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.685897 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.688061 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.688069 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.690531 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-626gd" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.691306 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.691643 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.692044 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.692182 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.698059 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.806141 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.806183 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntdk\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-kube-api-access-nntdk\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.806212 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.806234 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.806252 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.806314 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/e827f391-2fcb-4758-ae5e-deef3c712e53-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.806379 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e827f391-2fcb-4758-ae5e-deef3c712e53-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.806552 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.806672 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.806740 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.806778 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.907739 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.907795 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nntdk\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-kube-api-access-nntdk\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.907835 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.907861 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") 
" pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.907883 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.907910 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e827f391-2fcb-4758-ae5e-deef3c712e53-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.907938 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e827f391-2fcb-4758-ae5e-deef3c712e53-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.907995 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.908024 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.908062 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.908085 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.908207 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.908693 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.909296 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.909352 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.909603 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.909585 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.914573 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.915659 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e827f391-2fcb-4758-ae5e-deef3c712e53-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.923414 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nntdk\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-kube-api-access-nntdk\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.930529 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.934018 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.935128 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e827f391-2fcb-4758-ae5e-deef3c712e53-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") " pod="openstack/rabbitmq-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.944664 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] 
Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.945942 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.951278 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.955267 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.955413 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.955529 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.955669 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.955891 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-chr6p" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.956056 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 26 07:16:59 crc kubenswrapper[4909]: I1126 07:16:59.964077 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.011266 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.011310 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.011345 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.011408 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.011426 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.011537 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.011602 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.011648 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2zk8\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-kube-api-access-b2zk8\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.011667 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.011708 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.011726 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.016932 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.112800 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.112857 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.112886 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2zk8\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-kube-api-access-b2zk8\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.112906 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.112926 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.112944 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.112985 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.113004 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.113034 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.113078 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.113097 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.113285 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.113399 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.113738 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.114375 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.114784 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.115665 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.118020 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.118426 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.121303 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.124968 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.130927 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2zk8\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-kube-api-access-b2zk8\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.143647 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.307239 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.530444 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" event={"ID":"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee","Type":"ContainerStarted","Data":"5ba05713ba7d82d846a9f99251ab24d51faabd3564355ffa505ad98d1cc8cb5a"} Nov 26 07:17:00 crc kubenswrapper[4909]: I1126 07:17:00.609692 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.026453 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 07:17:01 crc kubenswrapper[4909]: W1126 07:17:01.044268 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37fbb13e_7e2e_451d_af0e_a648c4cde4c2.slice/crio-5308dd167f1c55fd869042c288c3cf397778b1bfef620eb4542966ccb671e15c WatchSource:0}: Error finding container 5308dd167f1c55fd869042c288c3cf397778b1bfef620eb4542966ccb671e15c: Status 404 returned error can't find the container with id 5308dd167f1c55fd869042c288c3cf397778b1bfef620eb4542966ccb671e15c Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.427977 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.429105 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.430723 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.435196 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.435407 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.435798 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.435809 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5cqhl" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.442734 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.458233 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.554290 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-operator-scripts\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.554626 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.554725 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-kolla-config\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.554770 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.554795 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg9kw\" (UniqueName: \"kubernetes.io/projected/10d0826f-4316-4c9a-bb8d-542fccd12a08-kube-api-access-dg9kw\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.554824 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-secrets\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 
07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.554847 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.554863 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-generated\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.554876 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-default\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.584342 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"37fbb13e-7e2e-451d-af0e-a648c4cde4c2","Type":"ContainerStarted","Data":"5308dd167f1c55fd869042c288c3cf397778b1bfef620eb4542966ccb671e15c"} Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.597426 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e827f391-2fcb-4758-ae5e-deef3c712e53","Type":"ContainerStarted","Data":"8ff3db2f6cd1f90d3907b606bc71de8f11a3adb45789a6e7f610308b2ae7580f"} Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.658832 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-kolla-config\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.658907 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.658940 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg9kw\" (UniqueName: \"kubernetes.io/projected/10d0826f-4316-4c9a-bb8d-542fccd12a08-kube-api-access-dg9kw\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.658965 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-secrets\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.658991 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " 
pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.659012 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-generated\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.660156 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-kolla-config\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.661010 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.661374 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-generated\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.662338 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-default\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.666037 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-secrets\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.659032 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-default\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.667943 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-operator-scripts\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.667989 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.669431 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-operator-scripts\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.682621 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.688251 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.690536 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg9kw\" (UniqueName: \"kubernetes.io/projected/10d0826f-4316-4c9a-bb8d-542fccd12a08-kube-api-access-dg9kw\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.692777 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") " pod="openstack/openstack-galera-0" Nov 26 07:17:01 crc kubenswrapper[4909]: I1126 07:17:01.769492 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.211283 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 26 07:17:02 crc kubenswrapper[4909]: W1126 07:17:02.227963 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10d0826f_4316_4c9a_bb8d_542fccd12a08.slice/crio-3bdf1053a53ebf6937c8fbe6d8f87b44ba3f8fb90e94d2e76de7e97ed5039dd4 WatchSource:0}: Error finding container 3bdf1053a53ebf6937c8fbe6d8f87b44ba3f8fb90e94d2e76de7e97ed5039dd4: Status 404 returned error can't find the container with id 3bdf1053a53ebf6937c8fbe6d8f87b44ba3f8fb90e94d2e76de7e97ed5039dd4 Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.609579 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10d0826f-4316-4c9a-bb8d-542fccd12a08","Type":"ContainerStarted","Data":"3bdf1053a53ebf6937c8fbe6d8f87b44ba3f8fb90e94d2e76de7e97ed5039dd4"} Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.617279 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.618790 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.620701 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-stqmz" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.621210 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.622115 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.622205 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.628469 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.689472 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.689516 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.689548 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.689682 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.689718 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.689744 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.689762 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vfdd\" (UniqueName: 
\"kubernetes.io/projected/24fe368f-39d5-438d-baf0-4e66700131f4-kube-api-access-4vfdd\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.689786 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.689819 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.791491 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.791546 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.791579 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.791647 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.791686 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.791714 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.791737 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vfdd\" (UniqueName: 
\"kubernetes.io/projected/24fe368f-39d5-438d-baf0-4e66700131f4-kube-api-access-4vfdd\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.791772 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.791817 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.792195 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.794634 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.794949 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.795496 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.795850 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.807690 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.807788 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " 
pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.808146 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.820713 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vfdd\" (UniqueName: \"kubernetes.io/projected/24fe368f-39d5-438d-baf0-4e66700131f4-kube-api-access-4vfdd\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.880900 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") " pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:02 crc kubenswrapper[4909]: I1126 07:17:02.947457 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.160604 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.161792 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.163198 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-mtktd" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.166625 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.181398 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.202309 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.204366 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.204431 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.204474 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-config-data\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.204755 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbj8h\" (UniqueName: \"kubernetes.io/projected/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kube-api-access-sbj8h\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.204801 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kolla-config\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.306725 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.306788 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.306810 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-config-data\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.306929 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbj8h\" (UniqueName: \"kubernetes.io/projected/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kube-api-access-sbj8h\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.306950 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kolla-config\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.311018 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-config-data\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.311096 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kolla-config\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.313160 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 
07:17:03.329425 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.358685 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbj8h\" (UniqueName: \"kubernetes.io/projected/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kube-api-access-sbj8h\") pod \"memcached-0\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " pod="openstack/memcached-0" Nov 26 07:17:03 crc kubenswrapper[4909]: I1126 07:17:03.496055 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 26 07:17:04 crc kubenswrapper[4909]: I1126 07:17:04.816541 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 07:17:04 crc kubenswrapper[4909]: I1126 07:17:04.817811 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 07:17:04 crc kubenswrapper[4909]: I1126 07:17:04.821876 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-lsq9l" Nov 26 07:17:04 crc kubenswrapper[4909]: I1126 07:17:04.827305 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 07:17:04 crc kubenswrapper[4909]: I1126 07:17:04.941935 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr2kz\" (UniqueName: \"kubernetes.io/projected/b87acf53-c499-4454-b417-a54a78973b10-kube-api-access-wr2kz\") pod \"kube-state-metrics-0\" (UID: \"b87acf53-c499-4454-b417-a54a78973b10\") " pod="openstack/kube-state-metrics-0" Nov 26 07:17:05 crc kubenswrapper[4909]: I1126 07:17:05.042923 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr2kz\" (UniqueName: \"kubernetes.io/projected/b87acf53-c499-4454-b417-a54a78973b10-kube-api-access-wr2kz\") pod \"kube-state-metrics-0\" (UID: \"b87acf53-c499-4454-b417-a54a78973b10\") " pod="openstack/kube-state-metrics-0" Nov 26 07:17:05 crc kubenswrapper[4909]: I1126 07:17:05.064437 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr2kz\" (UniqueName: \"kubernetes.io/projected/b87acf53-c499-4454-b417-a54a78973b10-kube-api-access-wr2kz\") pod \"kube-state-metrics-0\" (UID: \"b87acf53-c499-4454-b417-a54a78973b10\") " pod="openstack/kube-state-metrics-0" Nov 26 07:17:05 crc kubenswrapper[4909]: I1126 07:17:05.146013 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.076983 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.078991 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.088620 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.093834 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.093874 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-vgwj5" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.093966 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.105357 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.124071 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.265149 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.265238 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.265287 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.265322 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.265352 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-config\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.265368 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.265637 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.265678 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7t9l\" (UniqueName: \"kubernetes.io/projected/edd763da-b7ea-4a61-846c-029eb54d9a08-kube-api-access-b7t9l\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.367382 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.367433 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7t9l\" (UniqueName: \"kubernetes.io/projected/edd763da-b7ea-4a61-846c-029eb54d9a08-kube-api-access-b7t9l\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.367461 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.367512 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.367531 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.367554 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.367575 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-config\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.367608 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 
07:17:09.367855 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.368480 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-config\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.368678 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.369334 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.376800 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.379440 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.384556 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.385971 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sxvh6"] Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.387138 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.396766 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.397053 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.408924 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zkp4d" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.412539 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7t9l\" (UniqueName: \"kubernetes.io/projected/edd763da-b7ea-4a61-846c-029eb54d9a08-kube-api-access-b7t9l\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.413144 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.419694 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sxvh6"] Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.447907 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5f8k9"] Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.449714 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.452873 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5f8k9"] Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.570860 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-log-ovn\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.570941 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-run\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.570971 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-log\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.571047 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.571075 4909 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run-ovn\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.571100 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-combined-ca-bundle\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.571126 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-etc-ovs\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.571204 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbkg\" (UniqueName: \"kubernetes.io/projected/b793112e-ecec-4fb1-b06a-3bf4245af24b-kube-api-access-qqbkg\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.571333 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8bqh\" (UniqueName: \"kubernetes.io/projected/a74aad93-58f0-4023-95e3-3f0e92558f84-kube-api-access-k8bqh\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.571356 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a74aad93-58f0-4023-95e3-3f0e92558f84-scripts\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.571393 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b793112e-ecec-4fb1-b06a-3bf4245af24b-scripts\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.571421 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-ovn-controller-tls-certs\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.571439 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-lib\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673083 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-run\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673123 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-log\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673187 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673216 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run-ovn\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673239 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-combined-ca-bundle\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673262 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-etc-ovs\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673283 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqbkg\" (UniqueName: \"kubernetes.io/projected/b793112e-ecec-4fb1-b06a-3bf4245af24b-kube-api-access-qqbkg\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673336 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8bqh\" (UniqueName: \"kubernetes.io/projected/a74aad93-58f0-4023-95e3-3f0e92558f84-kube-api-access-k8bqh\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673360 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a74aad93-58f0-4023-95e3-3f0e92558f84-scripts\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673390 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b793112e-ecec-4fb1-b06a-3bf4245af24b-scripts\") pod \"ovn-controller-ovs-5f8k9\" 
(UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673417 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-ovn-controller-tls-certs\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673439 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-lib\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673468 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-log-ovn\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673714 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-log\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673783 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-run\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673815 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-log-ovn\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.673809 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.674055 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run-ovn\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.674080 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-lib\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.674275 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-etc-ovs\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.676089 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b793112e-ecec-4fb1-b06a-3bf4245af24b-scripts\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.676822 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-combined-ca-bundle\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.679790 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-ovn-controller-tls-certs\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.689623 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqbkg\" (UniqueName: \"kubernetes.io/projected/b793112e-ecec-4fb1-b06a-3bf4245af24b-kube-api-access-qqbkg\") pod \"ovn-controller-ovs-5f8k9\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") " pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.693914 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a74aad93-58f0-4023-95e3-3f0e92558f84-scripts\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.694789 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8bqh\" (UniqueName: \"kubernetes.io/projected/a74aad93-58f0-4023-95e3-3f0e92558f84-kube-api-access-k8bqh\") pod \"ovn-controller-sxvh6\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.703053 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.782958 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:09 crc kubenswrapper[4909]: I1126 07:17:09.791008 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.620729 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.622325 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.622417 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.625920 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.630356 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.630539 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.630712 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-59lgq" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.711851 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.711909 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.711935 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.711994 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.712021 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.712047 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.712075 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pksq\" (UniqueName: \"kubernetes.io/projected/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-kube-api-access-5pksq\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.712553 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-config\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.813554 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-config\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.813633 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.813656 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.813673 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.813705 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.813748 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.813767 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.813786 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pksq\" (UniqueName: \"kubernetes.io/projected/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-kube-api-access-5pksq\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.815006 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc 
kubenswrapper[4909]: I1126 07:17:11.815136 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-config\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.815364 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.815753 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.819939 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.823152 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.830115 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pksq\" (UniqueName: \"kubernetes.io/projected/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-kube-api-access-5pksq\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.832319 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.846083 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:11 crc kubenswrapper[4909]: I1126 07:17:11.974159 4909 util.go:30] "No sandbox for pod can be found. 
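[Editor's annotation: for ovsdbserver-sb-0 the PV-backed volume local-storage10-crc goes through a stage the configmap/secret/empty-dir volumes skip: after "VerifyControllerAttachedVolume started" (reconciler_common.go:245), the local-volume plugin first logs "MountVolume.MountDevice succeeded" with device mount path /mnt/openstack/pv10 (operation_generator.go:580) and only then "MountVolume.SetUp succeeded" into the pod directory. The sketch below (hypothetical file name phases.go, same parsing assumptions as the previous one, not kubelet code) reconstructs that per-volume phase order from the records.]

// phases.go — sketch: list, per volume UniqueName, the operation phases
// observed in the log, in order. For local-storage10-crc this prints
// VerifyControllerAttachedVolume started -> MountVolume started ->
// MountVolume.MountDevice succeeded -> MountVolume.SetUp succeeded;
// file-backed volumes show no MountDevice step.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

var (
	uniqueRe = regexp.MustCompile(`UniqueName: \\"([^\\"]+)\\"`)
	phaseRe  = regexp.MustCompile(`VerifyControllerAttachedVolume started|MountVolume started|MountVolume\.MountDevice succeeded|MountVolume\.SetUp succeeded`)
)

func main() {
	var order []string              // volumes in first-seen order
	phases := map[string][]string{} // UniqueName -> observed phases
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		p := phaseRe.FindString(line)
		u := uniqueRe.FindStringSubmatch(line)
		if p == "" || u == nil {
			continue
		}
		if _, seen := phases[u[1]]; !seen {
			order = append(order, u[1])
		}
		phases[u[1]] = append(phases[u[1]], p)
	}
	for _, v := range order {
		fmt.Println(v, "=>", strings.Join(phases[v], " -> "))
	}
}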
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.346618 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.347379 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mb2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-2jgxd_openstack(7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.348718 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" podUID="7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.634032 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.634163 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mp849,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8tl6p_openstack(7e04c2db-e792-4380-ab75-c274d8ef4777): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.635967 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p" podUID="7e04c2db-e792-4380-ab75-c274d8ef4777" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.691090 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.691540 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qvt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-pxwwr_openstack(276e634e-b151-474a-8231-481adbdfc0b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.698390 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-pxwwr" podUID="276e634e-b151-474a-8231-481adbdfc0b5" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.747857 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.748066 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc7jk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-pn7xl_openstack(2187b469-f631-45fe-bbf9-007050d474d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.749491 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl" podUID="2187b469-f631-45fe-bbf9-007050d474d2" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.823292 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-pxwwr" podUID="276e634e-b151-474a-8231-481adbdfc0b5" Nov 26 07:17:25 crc kubenswrapper[4909]: E1126 07:17:25.823361 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" podUID="7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee" Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.038051 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.179434 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 07:17:26 crc kubenswrapper[4909]: W1126 07:17:26.181988 4909 manager.go:1169] Failed to process watch event {EventType:0 
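[Editor's annotation: the four dnsmasq-dns pods fail here in the standard ErrImagePull -> ImagePullBackOff progression: the CRI pull of openstack-neutron-server:current-podified is aborted ("context canceled"), kuberuntime_manager logs the failed init-container start, pod_workers marks the sync as failed, and the next syncs report ImagePullBackOff while the kubelet waits out an exponential back-off before retrying. The sketch below illustrates the back-off shape only; the 10s initial delay and 5m cap are the commonly documented kubelet defaults, assumed here rather than read from this log.]

// backoff.go — sketch of the retry schedule behind "Back-off pulling
// image": the delay doubles from an initial value up to a cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // assumed initial back-off
	const maxDelay = 5 * time.Minute // assumed cap
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}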
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.185395 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sxvh6"]
Nov 26 07:17:26 crc kubenswrapper[4909]: W1126 07:17:26.193099 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda74aad93_58f0_4023_95e3_3f0e92558f84.slice/crio-d8b74a9a473b27b0fd6dd61141c23e58b8373c6f5d5b09d5a70fd09f16f457cf WatchSource:0}: Error finding container d8b74a9a473b27b0fd6dd61141c23e58b8373c6f5d5b09d5a70fd09f16f457cf: Status 404 returned error can't find the container with id d8b74a9a473b27b0fd6dd61141c23e58b8373c6f5d5b09d5a70fd09f16f457cf
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.293859 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.330723 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Nov 26 07:17:26 crc kubenswrapper[4909]: W1126 07:17:26.348693 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedd763da_b7ea_4a61_846c_029eb54d9a08.slice/crio-29db3239abfc8061ee8646dbd267af74c1583430d6ca7871592f839012e0448e WatchSource:0}: Error finding container 29db3239abfc8061ee8646dbd267af74c1583430d6ca7871592f839012e0448e: Status 404 returned error can't find the container with id 29db3239abfc8061ee8646dbd267af74c1583430d6ca7871592f839012e0448e
Nov 26 07:17:26 crc kubenswrapper[4909]: W1126 07:17:26.349810 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9bdd84_9798_4dc8_8fc7_e8dda24b12c7.slice/crio-a4bb5a99315558850dc3ec94966927cdfa55355fdb705576d52657c717e68051 WatchSource:0}: Error finding container a4bb5a99315558850dc3ec94966927cdfa55355fdb705576d52657c717e68051: Status 404 returned error can't find the container with id a4bb5a99315558850dc3ec94966927cdfa55355fdb705576d52657c717e68051
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.373026 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.396774 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p"
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.470274 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e04c2db-e792-4380-ab75-c274d8ef4777-config\") pod \"7e04c2db-e792-4380-ab75-c274d8ef4777\" (UID: \"7e04c2db-e792-4380-ab75-c274d8ef4777\") "
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.470324 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp849\" (UniqueName: \"kubernetes.io/projected/7e04c2db-e792-4380-ab75-c274d8ef4777-kube-api-access-mp849\") pod \"7e04c2db-e792-4380-ab75-c274d8ef4777\" (UID: \"7e04c2db-e792-4380-ab75-c274d8ef4777\") "
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.470360 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc7jk\" (UniqueName: \"kubernetes.io/projected/2187b469-f631-45fe-bbf9-007050d474d2-kube-api-access-jc7jk\") pod \"2187b469-f631-45fe-bbf9-007050d474d2\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") "
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.470383 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-dns-svc\") pod \"2187b469-f631-45fe-bbf9-007050d474d2\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") "
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.470449 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-config\") pod \"2187b469-f631-45fe-bbf9-007050d474d2\" (UID: \"2187b469-f631-45fe-bbf9-007050d474d2\") "
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.471343 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e04c2db-e792-4380-ab75-c274d8ef4777-config" (OuterVolumeSpecName: "config") pod "7e04c2db-e792-4380-ab75-c274d8ef4777" (UID: "7e04c2db-e792-4380-ab75-c274d8ef4777"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.471707 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2187b469-f631-45fe-bbf9-007050d474d2" (UID: "2187b469-f631-45fe-bbf9-007050d474d2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.472121 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-config" (OuterVolumeSpecName: "config") pod "2187b469-f631-45fe-bbf9-007050d474d2" (UID: "2187b469-f631-45fe-bbf9-007050d474d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.540398 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2187b469-f631-45fe-bbf9-007050d474d2-kube-api-access-jc7jk" (OuterVolumeSpecName: "kube-api-access-jc7jk") pod "2187b469-f631-45fe-bbf9-007050d474d2" (UID: "2187b469-f631-45fe-bbf9-007050d474d2"). InnerVolumeSpecName "kube-api-access-jc7jk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.541873 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e04c2db-e792-4380-ab75-c274d8ef4777-kube-api-access-mp849" (OuterVolumeSpecName: "kube-api-access-mp849") pod "7e04c2db-e792-4380-ab75-c274d8ef4777" (UID: "7e04c2db-e792-4380-ab75-c274d8ef4777"). InnerVolumeSpecName "kube-api-access-mp849". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.572457 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp849\" (UniqueName: \"kubernetes.io/projected/7e04c2db-e792-4380-ab75-c274d8ef4777-kube-api-access-mp849\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.572491 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc7jk\" (UniqueName: \"kubernetes.io/projected/2187b469-f631-45fe-bbf9-007050d474d2-kube-api-access-jc7jk\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.572502 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.572511 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187b469-f631-45fe-bbf9-007050d474d2-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.572519 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e04c2db-e792-4380-ab75-c274d8ef4777-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.588488 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 26 07:17:26 crc kubenswrapper[4909]: W1126 07:17:26.598422 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ea1ebb8_6827_4f0b_a055_3b77e18755ac.slice/crio-940a369ef08c6a36d1303a67c572a6eea4aef5595c8f23221da0147127ee47b8 WatchSource:0}: Error finding container 940a369ef08c6a36d1303a67c572a6eea4aef5595c8f23221da0147127ee47b8: Status 404 returned error can't find the container with id 940a369ef08c6a36d1303a67c572a6eea4aef5595c8f23221da0147127ee47b8
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.686581 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5f8k9"]
Nov 26 07:17:26 crc kubenswrapper[4909]: W1126 07:17:26.699945 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb793112e_ecec_4fb1_b06a_3bf4245af24b.slice/crio-db7ec0671549e28556b79ea5289a1dd1c7d199414a38fc0ad2681a93a85b9aae WatchSource:0}: Error finding container db7ec0671549e28556b79ea5289a1dd1c7d199414a38fc0ad2681a93a85b9aae: Status 404 returned error can't find the container with id db7ec0671549e28556b79ea5289a1dd1c7d199414a38fc0ad2681a93a85b9aae
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.825433 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b87acf53-c499-4454-b417-a54a78973b10","Type":"ContainerStarted","Data":"213fe145ccfadfc8f881b37ad349fec8de7c26712ef19e20bafc37e9578b95bf"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.826757 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"edd763da-b7ea-4a61-846c-029eb54d9a08","Type":"ContainerStarted","Data":"29db3239abfc8061ee8646dbd267af74c1583430d6ca7871592f839012e0448e"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.828109 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p" event={"ID":"7e04c2db-e792-4380-ab75-c274d8ef4777","Type":"ContainerDied","Data":"74413e12e66d05a61f6e28673c048034c4279b6672bd18947e4e2315d421df7c"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.828152 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tl6p"
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.830362 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24fe368f-39d5-438d-baf0-4e66700131f4","Type":"ContainerStarted","Data":"3b60bdf9d2f27f1a4462ec1b693a6f574de16cc4ac333faddad603ec240eb169"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.830387 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24fe368f-39d5-438d-baf0-4e66700131f4","Type":"ContainerStarted","Data":"74ac97601a0ecf73149113f3e5fb334717b79b4d933a2e6b3b45d2d115fe8e32"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.833513 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sxvh6" event={"ID":"a74aad93-58f0-4023-95e3-3f0e92558f84","Type":"ContainerStarted","Data":"d8b74a9a473b27b0fd6dd61141c23e58b8373c6f5d5b09d5a70fd09f16f457cf"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.839306 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7","Type":"ContainerStarted","Data":"a4bb5a99315558850dc3ec94966927cdfa55355fdb705576d52657c717e68051"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.841725 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"37fbb13e-7e2e-451d-af0e-a648c4cde4c2","Type":"ContainerStarted","Data":"d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.845468 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10d0826f-4316-4c9a-bb8d-542fccd12a08","Type":"ContainerStarted","Data":"6a6d4b0e6968ecb97d91448fae9b055603a1b2cd7c5c064b8021fc4fd6cd7dee"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.846978 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl" event={"ID":"2187b469-f631-45fe-bbf9-007050d474d2","Type":"ContainerDied","Data":"c4b74dbc36983646f69bb5dbab467a3c73ed0a78c340c6ed81a45d14a71166a8"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.847082 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pn7xl"
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.850396 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5f8k9" event={"ID":"b793112e-ecec-4fb1-b06a-3bf4245af24b","Type":"ContainerStarted","Data":"db7ec0671549e28556b79ea5289a1dd1c7d199414a38fc0ad2681a93a85b9aae"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.863711 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5ea1ebb8-6827-4f0b-a055-3b77e18755ac","Type":"ContainerStarted","Data":"940a369ef08c6a36d1303a67c572a6eea4aef5595c8f23221da0147127ee47b8"}
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.894808 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tl6p"]
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.901749 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tl6p"]
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.970353 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn7xl"]
Nov 26 07:17:26 crc kubenswrapper[4909]: I1126 07:17:26.976072 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pn7xl"]
Nov 26 07:17:27 crc kubenswrapper[4909]: I1126 07:17:27.873119 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e827f391-2fcb-4758-ae5e-deef3c712e53","Type":"ContainerStarted","Data":"3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f"}
Nov 26 07:17:28 crc kubenswrapper[4909]: I1126 07:17:28.524362 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2187b469-f631-45fe-bbf9-007050d474d2" path="/var/lib/kubelet/pods/2187b469-f631-45fe-bbf9-007050d474d2/volumes"
Nov 26 07:17:28 crc kubenswrapper[4909]: I1126 07:17:28.525273 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e04c2db-e792-4380-ab75-c274d8ef4777" path="/var/lib/kubelet/pods/7e04c2db-e792-4380-ab75-c274d8ef4777/volumes"
Nov 26 07:17:29 crc kubenswrapper[4909]: I1126 07:17:29.890711 4909 generic.go:334] "Generic (PLEG): container finished" podID="10d0826f-4316-4c9a-bb8d-542fccd12a08" containerID="6a6d4b0e6968ecb97d91448fae9b055603a1b2cd7c5c064b8021fc4fd6cd7dee" exitCode=0
Nov 26 07:17:29 crc kubenswrapper[4909]: I1126 07:17:29.890801 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10d0826f-4316-4c9a-bb8d-542fccd12a08","Type":"ContainerDied","Data":"6a6d4b0e6968ecb97d91448fae9b055603a1b2cd7c5c064b8021fc4fd6cd7dee"}
Nov 26 07:17:29 crc kubenswrapper[4909]: I1126 07:17:29.893553 4909 generic.go:334] "Generic (PLEG): container finished" podID="24fe368f-39d5-438d-baf0-4e66700131f4" containerID="3b60bdf9d2f27f1a4462ec1b693a6f574de16cc4ac333faddad603ec240eb169" exitCode=0
Nov 26 07:17:29 crc kubenswrapper[4909]: I1126 07:17:29.893581 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24fe368f-39d5-438d-baf0-4e66700131f4","Type":"ContainerDied","Data":"3b60bdf9d2f27f1a4462ec1b693a6f574de16cc4ac333faddad603ec240eb169"}
Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.907749 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b87acf53-c499-4454-b417-a54a78973b10","Type":"ContainerStarted","Data":"81887acfac475fba90044a7146243a1f3bbe2dbd1f0f3e8865736148b105cdd0"}
event={"ID":"b87acf53-c499-4454-b417-a54a78973b10","Type":"ContainerStarted","Data":"81887acfac475fba90044a7146243a1f3bbe2dbd1f0f3e8865736148b105cdd0"} Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.908380 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.910124 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"edd763da-b7ea-4a61-846c-029eb54d9a08","Type":"ContainerStarted","Data":"a08ac97fdf299b444c078205636c30387ea798646fc3f60cc7385009b74ea780"} Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.912424 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24fe368f-39d5-438d-baf0-4e66700131f4","Type":"ContainerStarted","Data":"4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497"} Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.914933 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7","Type":"ContainerStarted","Data":"c6f4d8f0f4e61dc21f3c450f4b1e6411451bb293632a6aacfce4bc571716303d"} Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.915510 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.917047 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5f8k9" event={"ID":"b793112e-ecec-4fb1-b06a-3bf4245af24b","Type":"ContainerStarted","Data":"a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97"} Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.921182 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10d0826f-4316-4c9a-bb8d-542fccd12a08","Type":"ContainerStarted","Data":"e851027dc323ea0e4c8353f7e34bc561fcaf6af5e2c46334b9918eabe5ff4a83"} Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.924201 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sxvh6" event={"ID":"a74aad93-58f0-4023-95e3-3f0e92558f84","Type":"ContainerStarted","Data":"93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199"} Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.924359 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sxvh6" Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.926511 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5ea1ebb8-6827-4f0b-a055-3b77e18755ac","Type":"ContainerStarted","Data":"6799943ef5fdd232348a851fd58224528db5e2724e6f622defc9357dcbfa014e"} Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.928802 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.502402014 podStartE2EDuration="27.928785885s" podCreationTimestamp="2025-11-26 07:17:04 +0000 UTC" firstStartedPulling="2025-11-26 07:17:26.184054322 +0000 UTC m=+1018.330265508" lastFinishedPulling="2025-11-26 07:17:31.610438223 +0000 UTC m=+1023.756649379" observedRunningTime="2025-11-26 07:17:31.927845509 +0000 UTC m=+1024.074056675" watchObservedRunningTime="2025-11-26 07:17:31.928785885 +0000 UTC m=+1024.074997061" Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.948697 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-sxvh6" podStartSLOduration=18.228922666 podStartE2EDuration="22.948678888s" podCreationTimestamp="2025-11-26 07:17:09 +0000 UTC" firstStartedPulling="2025-11-26 07:17:26.196070034 +0000 UTC m=+1018.342281210" lastFinishedPulling="2025-11-26 07:17:30.915826266 +0000 UTC m=+1023.062037432" observedRunningTime="2025-11-26 07:17:31.948442212 +0000 UTC m=+1024.094653378" watchObservedRunningTime="2025-11-26 07:17:31.948678888 +0000 UTC m=+1024.094890054" Nov 26 07:17:31 crc kubenswrapper[4909]: I1126 07:17:31.990809 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=30.990791048 podStartE2EDuration="30.990791048s" podCreationTimestamp="2025-11-26 07:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:17:31.984455089 +0000 UTC m=+1024.130666255" watchObservedRunningTime="2025-11-26 07:17:31.990791048 +0000 UTC m=+1024.137002214" Nov 26 07:17:32 crc kubenswrapper[4909]: I1126 07:17:32.000020 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.606833412 podStartE2EDuration="32.000003775s" podCreationTimestamp="2025-11-26 07:17:00 +0000 UTC" firstStartedPulling="2025-11-26 07:17:02.232842367 +0000 UTC m=+994.379053533" lastFinishedPulling="2025-11-26 07:17:25.62601273 +0000 UTC m=+1017.772223896" observedRunningTime="2025-11-26 07:17:31.99943366 +0000 UTC m=+1024.145644836" watchObservedRunningTime="2025-11-26 07:17:32.000003775 +0000 UTC m=+1024.146214951" Nov 26 07:17:32 crc kubenswrapper[4909]: I1126 07:17:32.016725 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=24.361812401 podStartE2EDuration="29.016706153s" podCreationTimestamp="2025-11-26 07:17:03 +0000 UTC" firstStartedPulling="2025-11-26 07:17:26.351858384 +0000 UTC m=+1018.498069550" lastFinishedPulling="2025-11-26 07:17:31.006752126 +0000 UTC m=+1023.152963302" observedRunningTime="2025-11-26 07:17:32.012732747 +0000 UTC m=+1024.158943913" watchObservedRunningTime="2025-11-26 07:17:32.016706153 +0000 UTC m=+1024.162917309" Nov 26 07:17:32 crc kubenswrapper[4909]: I1126 07:17:32.937226 4909 generic.go:334] "Generic (PLEG): container finished" podID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerID="a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97" exitCode=0 Nov 26 07:17:32 crc kubenswrapper[4909]: I1126 07:17:32.937520 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5f8k9" event={"ID":"b793112e-ecec-4fb1-b06a-3bf4245af24b","Type":"ContainerDied","Data":"a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97"} Nov 26 07:17:32 crc kubenswrapper[4909]: I1126 07:17:32.948384 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:32 crc kubenswrapper[4909]: I1126 07:17:32.948431 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:33 crc kubenswrapper[4909]: I1126 07:17:33.963553 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5f8k9" event={"ID":"b793112e-ecec-4fb1-b06a-3bf4245af24b","Type":"ContainerStarted","Data":"fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890"} Nov 26 07:17:33 crc kubenswrapper[4909]: I1126 
Nov 26 07:17:33 crc kubenswrapper[4909]: I1126 07:17:33.963902 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5f8k9" event={"ID":"b793112e-ecec-4fb1-b06a-3bf4245af24b","Type":"ContainerStarted","Data":"2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a"}
Nov 26 07:17:33 crc kubenswrapper[4909]: I1126 07:17:33.963924 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5f8k9"
Nov 26 07:17:33 crc kubenswrapper[4909]: I1126 07:17:33.963941 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5f8k9"
Nov 26 07:17:33 crc kubenswrapper[4909]: I1126 07:17:33.994875 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5f8k9" podStartSLOduration=20.78104599 podStartE2EDuration="24.994850387s" podCreationTimestamp="2025-11-26 07:17:09 +0000 UTC" firstStartedPulling="2025-11-26 07:17:26.702326197 +0000 UTC m=+1018.848537353" lastFinishedPulling="2025-11-26 07:17:30.916130594 +0000 UTC m=+1023.062341750" observedRunningTime="2025-11-26 07:17:33.987646194 +0000 UTC m=+1026.133857370" watchObservedRunningTime="2025-11-26 07:17:33.994850387 +0000 UTC m=+1026.141061573"
Nov 26 07:17:35 crc kubenswrapper[4909]: E1126 07:17:35.154528 4909 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.206:39770->38.129.56.206:33469: write tcp 38.129.56.206:39770->38.129.56.206:33469: write: broken pipe
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.001175 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5ea1ebb8-6827-4f0b-a055-3b77e18755ac","Type":"ContainerStarted","Data":"bbec5715c551f88ea231efe57c9124f91b9b77cfb5ebea4c9e465ffb097ed605"}
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.004195 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"edd763da-b7ea-4a61-846c-029eb54d9a08","Type":"ContainerStarted","Data":"4daadf8992d503de7a3e7bc9d8d0fd7d0f19bd5aa98704266a78b9b676826679"}
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.024796 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.633565763 podStartE2EDuration="26.024775581s" podCreationTimestamp="2025-11-26 07:17:10 +0000 UTC" firstStartedPulling="2025-11-26 07:17:26.600827664 +0000 UTC m=+1018.747038830" lastFinishedPulling="2025-11-26 07:17:34.992037482 +0000 UTC m=+1027.138248648" observedRunningTime="2025-11-26 07:17:36.019874479 +0000 UTC m=+1028.166085655" watchObservedRunningTime="2025-11-26 07:17:36.024775581 +0000 UTC m=+1028.170986747"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.040225 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.386185885 podStartE2EDuration="28.040203484s" podCreationTimestamp="2025-11-26 07:17:08 +0000 UTC" firstStartedPulling="2025-11-26 07:17:26.35092788 +0000 UTC m=+1018.497139036" lastFinishedPulling="2025-11-26 07:17:35.004945469 +0000 UTC m=+1027.151156635" observedRunningTime="2025-11-26 07:17:36.03743293 +0000 UTC m=+1028.183644106" watchObservedRunningTime="2025-11-26 07:17:36.040203484 +0000 UTC m=+1028.186414660"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.309567 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-8vd9g"]
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.311333 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.319266 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.333458 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8vd9g"]
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.465209 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2jgxd"]
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.474637 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.474702 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovs-rundir\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.474753 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dbgn\" (UniqueName: \"kubernetes.io/projected/74ffd03c-7228-474b-830e-01f0be8998d5-kube-api-access-4dbgn\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.474807 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovn-rundir\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.474842 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-combined-ca-bundle\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.474867 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ffd03c-7228-474b-830e-01f0be8998d5-config\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.530895 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4g5pn"]
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.532395 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.534389 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.540129 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4g5pn"]
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.576678 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.576729 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovs-rundir\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.576772 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dbgn\" (UniqueName: \"kubernetes.io/projected/74ffd03c-7228-474b-830e-01f0be8998d5-kube-api-access-4dbgn\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.576788 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovn-rundir\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.576816 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-combined-ca-bundle\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.576835 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ffd03c-7228-474b-830e-01f0be8998d5-config\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.577511 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ffd03c-7228-474b-830e-01f0be8998d5-config\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.577774 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovn-rundir\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g"
Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.578873 4909 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovs-rundir\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.585221 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-combined-ca-bundle\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.591103 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.604858 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dbgn\" (UniqueName: \"kubernetes.io/projected/74ffd03c-7228-474b-830e-01f0be8998d5-kube-api-access-4dbgn\") pod \"ovn-controller-metrics-8vd9g\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " pod="openstack/ovn-controller-metrics-8vd9g" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.616605 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pxwwr"] Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.643917 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8ssxt"] Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.645843 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.652968 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-8vd9g" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.664263 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.678409 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.678515 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xxxd\" (UniqueName: \"kubernetes.io/projected/a4aa5148-793f-4241-9953-d2e6477f52a3-kube-api-access-6xxxd\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.678548 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.678806 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-config\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.700314 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8ssxt"] Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.707362 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.782999 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-config\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.783051 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krzmx\" (UniqueName: \"kubernetes.io/projected/d0b52760-1541-434d-ba86-743f5fbfbcb8-kube-api-access-krzmx\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.783074 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.783103 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-config\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.783125 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.783160 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xxxd\" (UniqueName: \"kubernetes.io/projected/a4aa5148-793f-4241-9953-d2e6477f52a3-kube-api-access-6xxxd\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.783180 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.783232 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.783260 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.784368 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-config\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.784923 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.785672 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.813867 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xxxd\" (UniqueName: 
\"kubernetes.io/projected/a4aa5148-793f-4241-9953-d2e6477f52a3-kube-api-access-6xxxd\") pod \"dnsmasq-dns-7f896c8c65-4g5pn\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") " pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.857376 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.884747 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krzmx\" (UniqueName: \"kubernetes.io/projected/d0b52760-1541-434d-ba86-743f5fbfbcb8-kube-api-access-krzmx\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.884865 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-config\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.884897 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.887294 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.887396 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.889855 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.890843 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-config\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.891668 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.892629 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.899423 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.921421 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krzmx\" (UniqueName: \"kubernetes.io/projected/d0b52760-1541-434d-ba86-743f5fbfbcb8-kube-api-access-krzmx\") pod \"dnsmasq-dns-86db49b7ff-8ssxt\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") " pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.965350 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.974734 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:36 crc kubenswrapper[4909]: I1126 07:17:36.987042 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.019666 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" event={"ID":"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee","Type":"ContainerDied","Data":"5ba05713ba7d82d846a9f99251ab24d51faabd3564355ffa505ad98d1cc8cb5a"} Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.019749 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2jgxd" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.021303 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.052413 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pxwwr" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.075645 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.092346 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-dns-svc\") pod \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.092435 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-config\") pod \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.092511 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mb2d\" (UniqueName: \"kubernetes.io/projected/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-kube-api-access-7mb2d\") pod \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\" (UID: \"7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee\") " Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.092895 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee" (UID: "7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.092964 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.093337 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-config" (OuterVolumeSpecName: "config") pod "7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee" (UID: "7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.102880 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-kube-api-access-7mb2d" (OuterVolumeSpecName: "kube-api-access-7mb2d") pod "7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee" (UID: "7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee"). InnerVolumeSpecName "kube-api-access-7mb2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.193525 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qvt8\" (UniqueName: \"kubernetes.io/projected/276e634e-b151-474a-8231-481adbdfc0b5-kube-api-access-9qvt8\") pod \"276e634e-b151-474a-8231-481adbdfc0b5\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.193575 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-config\") pod \"276e634e-b151-474a-8231-481adbdfc0b5\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.193724 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-dns-svc\") pod \"276e634e-b151-474a-8231-481adbdfc0b5\" (UID: \"276e634e-b151-474a-8231-481adbdfc0b5\") " Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.194166 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.194184 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mb2d\" (UniqueName: \"kubernetes.io/projected/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee-kube-api-access-7mb2d\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.196096 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-config" (OuterVolumeSpecName: "config") pod "276e634e-b151-474a-8231-481adbdfc0b5" (UID: "276e634e-b151-474a-8231-481adbdfc0b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.196176 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "276e634e-b151-474a-8231-481adbdfc0b5" (UID: "276e634e-b151-474a-8231-481adbdfc0b5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.198456 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/276e634e-b151-474a-8231-481adbdfc0b5-kube-api-access-9qvt8" (OuterVolumeSpecName: "kube-api-access-9qvt8") pod "276e634e-b151-474a-8231-481adbdfc0b5" (UID: "276e634e-b151-474a-8231-481adbdfc0b5"). InnerVolumeSpecName "kube-api-access-9qvt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.296008 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.296051 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qvt8\" (UniqueName: \"kubernetes.io/projected/276e634e-b151-474a-8231-481adbdfc0b5-kube-api-access-9qvt8\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.296063 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/276e634e-b151-474a-8231-481adbdfc0b5-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.388961 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2jgxd"] Nov 26 07:17:37 crc kubenswrapper[4909]: W1126 07:17:37.402470 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74ffd03c_7228_474b_830e_01f0be8998d5.slice/crio-98013ce43edf17179601ca531c0127bc242978353928c0517537fa99294165a3 WatchSource:0}: Error finding container 98013ce43edf17179601ca531c0127bc242978353928c0517537fa99294165a3: Status 404 returned error can't find the container with id 98013ce43edf17179601ca531c0127bc242978353928c0517537fa99294165a3 Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.404657 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2jgxd"] Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.418249 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8vd9g"] Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.508309 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8ssxt"] Nov 26 07:17:37 crc kubenswrapper[4909]: I1126 07:17:37.514382 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4g5pn"] Nov 26 07:17:37 crc kubenswrapper[4909]: W1126 07:17:37.516391 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0b52760_1541_434d_ba86_743f5fbfbcb8.slice/crio-fd3f601979321606154a37409ef0862b5782b88e7f33a459a9739195247ac493 WatchSource:0}: Error finding container fd3f601979321606154a37409ef0862b5782b88e7f33a459a9739195247ac493: Status 404 returned error can't find the container with id fd3f601979321606154a37409ef0862b5782b88e7f33a459a9739195247ac493 Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.030573 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8vd9g" event={"ID":"74ffd03c-7228-474b-830e-01f0be8998d5","Type":"ContainerStarted","Data":"5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d"} Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.030855 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8vd9g" event={"ID":"74ffd03c-7228-474b-830e-01f0be8998d5","Type":"ContainerStarted","Data":"98013ce43edf17179601ca531c0127bc242978353928c0517537fa99294165a3"} Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.040745 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" 
event={"ID":"d0b52760-1541-434d-ba86-743f5fbfbcb8","Type":"ContainerStarted","Data":"fd3f601979321606154a37409ef0862b5782b88e7f33a459a9739195247ac493"} Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.042697 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pxwwr" event={"ID":"276e634e-b151-474a-8231-481adbdfc0b5","Type":"ContainerDied","Data":"926957347cf8c5b324ba25f4f805f07625106a9d92421f1a2524bd9275e78560"} Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.042808 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pxwwr" Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.045149 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" event={"ID":"a4aa5148-793f-4241-9953-d2e6477f52a3","Type":"ContainerStarted","Data":"301b362e8f7f7127934e3f19d918c212209b62425ac86abdb296e0d58c09e896"} Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.052658 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-8vd9g" podStartSLOduration=2.052638759 podStartE2EDuration="2.052638759s" podCreationTimestamp="2025-11-26 07:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:17:38.044786828 +0000 UTC m=+1030.190997994" watchObservedRunningTime="2025-11-26 07:17:38.052638759 +0000 UTC m=+1030.198849915" Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.162997 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pxwwr"] Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.174765 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pxwwr"] Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.497755 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.506825 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="276e634e-b151-474a-8231-481adbdfc0b5" path="/var/lib/kubelet/pods/276e634e-b151-474a-8231-481adbdfc0b5/volumes" Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.507172 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee" path="/var/lib/kubelet/pods/7f53f109-3ddd-4f4d-bb1d-2cba92fdfbee/volumes" Nov 26 07:17:38 crc kubenswrapper[4909]: I1126 07:17:38.974291 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.017883 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.052865 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.054257 4909 generic.go:334] "Generic (PLEG): container finished" podID="a4aa5148-793f-4241-9953-d2e6477f52a3" containerID="78d52cacdc11017a35ef99767ed6e1cb3ca1441d05a540d46419f108c68b215e" exitCode=0 Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.054402 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" 
event={"ID":"a4aa5148-793f-4241-9953-d2e6477f52a3","Type":"ContainerDied","Data":"78d52cacdc11017a35ef99767ed6e1cb3ca1441d05a540d46419f108c68b215e"} Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.058074 4909 generic.go:334] "Generic (PLEG): container finished" podID="d0b52760-1541-434d-ba86-743f5fbfbcb8" containerID="9f3866bf41307ad8d70ca307667303bc90e79b3563b9b1a10f97208562ed0d8d" exitCode=0 Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.058161 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" event={"ID":"d0b52760-1541-434d-ba86-743f5fbfbcb8","Type":"ContainerDied","Data":"9f3866bf41307ad8d70ca307667303bc90e79b3563b9b1a10f97208562ed0d8d"} Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.123050 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.154787 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.416555 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.418138 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.428020 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.428493 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.428614 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.428908 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wwx8w" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.451008 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.537133 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-scripts\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.537204 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.537258 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-config\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.537310 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.537381 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.537397 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxwr\" (UniqueName: \"kubernetes.io/projected/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-kube-api-access-wlxwr\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.537448 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.639085 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.639179 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-config\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.639228 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.639331 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.639355 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlxwr\" (UniqueName: \"kubernetes.io/projected/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-kube-api-access-wlxwr\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.639393 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 
07:17:39.639423 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-scripts\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.640130 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.640480 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-scripts\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.640777 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-config\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.646782 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.646897 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.648086 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.655159 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlxwr\" (UniqueName: \"kubernetes.io/projected/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-kube-api-access-wlxwr\") pod \"ovn-northd-0\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") " pod="openstack/ovn-northd-0" Nov 26 07:17:39 crc kubenswrapper[4909]: I1126 07:17:39.748698 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 26 07:17:40 crc kubenswrapper[4909]: I1126 07:17:40.071287 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" event={"ID":"d0b52760-1541-434d-ba86-743f5fbfbcb8","Type":"ContainerStarted","Data":"254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a"} Nov 26 07:17:40 crc kubenswrapper[4909]: I1126 07:17:40.071867 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:40 crc kubenswrapper[4909]: I1126 07:17:40.071887 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" event={"ID":"a4aa5148-793f-4241-9953-d2e6477f52a3","Type":"ContainerStarted","Data":"2e7005e7d64211aa2609f1e2a1b76d67af0cff9e2d869ff14189cb4070c7a743"} Nov 26 07:17:40 crc kubenswrapper[4909]: I1126 07:17:40.089768 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" podStartSLOduration=3.59239774 podStartE2EDuration="4.089748924s" podCreationTimestamp="2025-11-26 07:17:36 +0000 UTC" firstStartedPulling="2025-11-26 07:17:37.518492237 +0000 UTC m=+1029.664703403" lastFinishedPulling="2025-11-26 07:17:38.015843421 +0000 UTC m=+1030.162054587" observedRunningTime="2025-11-26 07:17:40.084652937 +0000 UTC m=+1032.230864133" watchObservedRunningTime="2025-11-26 07:17:40.089748924 +0000 UTC m=+1032.235960110" Nov 26 07:17:40 crc kubenswrapper[4909]: I1126 07:17:40.108654 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" podStartSLOduration=3.682071107 podStartE2EDuration="4.108628151s" podCreationTimestamp="2025-11-26 07:17:36 +0000 UTC" firstStartedPulling="2025-11-26 07:17:37.514287775 +0000 UTC m=+1029.660498941" lastFinishedPulling="2025-11-26 07:17:37.940844819 +0000 UTC m=+1030.087055985" observedRunningTime="2025-11-26 07:17:40.10078144 +0000 UTC m=+1032.246992616" watchObservedRunningTime="2025-11-26 07:17:40.108628151 +0000 UTC m=+1032.254839337" Nov 26 07:17:40 crc kubenswrapper[4909]: I1126 07:17:40.230929 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 26 07:17:41 crc kubenswrapper[4909]: I1126 07:17:41.105895 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1f85aa19-7a2b-461e-9f33-6ba3f3261da4","Type":"ContainerStarted","Data":"0ed0e2452160ef4ae95ae50c75b48257094cd6d0cbd42b0102d2bb54ef54c6ff"} Nov 26 07:17:41 crc kubenswrapper[4909]: I1126 07:17:41.106243 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" Nov 26 07:17:41 crc kubenswrapper[4909]: I1126 07:17:41.770292 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 26 07:17:41 crc kubenswrapper[4909]: I1126 07:17:41.770730 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 26 07:17:41 crc kubenswrapper[4909]: I1126 07:17:41.812485 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 26 07:17:42 crc kubenswrapper[4909]: I1126 07:17:42.118432 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1f85aa19-7a2b-461e-9f33-6ba3f3261da4","Type":"ContainerStarted","Data":"961f74545256d62a34bc75e2a3d148f6d0a38e6f3d41c1cc128a6f4f1eccd8f1"} Nov 26 
07:17:42 crc kubenswrapper[4909]: I1126 07:17:42.118486 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1f85aa19-7a2b-461e-9f33-6ba3f3261da4","Type":"ContainerStarted","Data":"c45a332a215d0d2112e3b34236e6f38a29d0cb0b840b7dcbc7cc8721193190e5"} Nov 26 07:17:42 crc kubenswrapper[4909]: I1126 07:17:42.149963 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.141597136 podStartE2EDuration="3.149934709s" podCreationTimestamp="2025-11-26 07:17:39 +0000 UTC" firstStartedPulling="2025-11-26 07:17:40.236396029 +0000 UTC m=+1032.382607195" lastFinishedPulling="2025-11-26 07:17:41.244733602 +0000 UTC m=+1033.390944768" observedRunningTime="2025-11-26 07:17:42.139032397 +0000 UTC m=+1034.285243573" watchObservedRunningTime="2025-11-26 07:17:42.149934709 +0000 UTC m=+1034.296145915" Nov 26 07:17:42 crc kubenswrapper[4909]: I1126 07:17:42.169046 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.124765 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.197545 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-82vbv"] Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.198539 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-82vbv" Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.214951 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-82vbv"] Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.295838 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvdcx\" (UniqueName: \"kubernetes.io/projected/817235db-6c0f-43f5-8328-6eee7baf5839-kube-api-access-hvdcx\") pod \"keystone-db-create-82vbv\" (UID: \"817235db-6c0f-43f5-8328-6eee7baf5839\") " pod="openstack/keystone-db-create-82vbv" Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.397363 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvdcx\" (UniqueName: \"kubernetes.io/projected/817235db-6c0f-43f5-8328-6eee7baf5839-kube-api-access-hvdcx\") pod \"keystone-db-create-82vbv\" (UID: \"817235db-6c0f-43f5-8328-6eee7baf5839\") " pod="openstack/keystone-db-create-82vbv" Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.415621 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvdcx\" (UniqueName: \"kubernetes.io/projected/817235db-6c0f-43f5-8328-6eee7baf5839-kube-api-access-hvdcx\") pod \"keystone-db-create-82vbv\" (UID: \"817235db-6c0f-43f5-8328-6eee7baf5839\") " pod="openstack/keystone-db-create-82vbv" Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.447739 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zk568"] Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.448680 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zk568"
Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.458114 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zk568"]
Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.498188 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqsx4\" (UniqueName: \"kubernetes.io/projected/8b71a783-4ce8-4d76-8023-65f4bc62bb61-kube-api-access-wqsx4\") pod \"placement-db-create-zk568\" (UID: \"8b71a783-4ce8-4d76-8023-65f4bc62bb61\") " pod="openstack/placement-db-create-zk568"
Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.515557 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-82vbv"
Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.599787 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqsx4\" (UniqueName: \"kubernetes.io/projected/8b71a783-4ce8-4d76-8023-65f4bc62bb61-kube-api-access-wqsx4\") pod \"placement-db-create-zk568\" (UID: \"8b71a783-4ce8-4d76-8023-65f4bc62bb61\") " pod="openstack/placement-db-create-zk568"
Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.617729 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqsx4\" (UniqueName: \"kubernetes.io/projected/8b71a783-4ce8-4d76-8023-65f4bc62bb61-kube-api-access-wqsx4\") pod \"placement-db-create-zk568\" (UID: \"8b71a783-4ce8-4d76-8023-65f4bc62bb61\") " pod="openstack/placement-db-create-zk568"
Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.770761 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zk568"
Nov 26 07:17:43 crc kubenswrapper[4909]: I1126 07:17:43.925739 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-82vbv"]
Nov 26 07:17:44 crc kubenswrapper[4909]: I1126 07:17:44.132303 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-82vbv" event={"ID":"817235db-6c0f-43f5-8328-6eee7baf5839","Type":"ContainerStarted","Data":"e3e7f230581faf36b2c79ad68aca0174468cc8fd033f7e484814c3189b2ac392"}
Nov 26 07:17:44 crc kubenswrapper[4909]: I1126 07:17:44.132619 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-82vbv" event={"ID":"817235db-6c0f-43f5-8328-6eee7baf5839","Type":"ContainerStarted","Data":"32f82b39420b2d865c23760a911b125f3c25eeb80824694c536e0097340ed29a"}
Nov 26 07:17:44 crc kubenswrapper[4909]: I1126 07:17:44.148278 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-82vbv" podStartSLOduration=1.14825563 podStartE2EDuration="1.14825563s" podCreationTimestamp="2025-11-26 07:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:17:44.145663979 +0000 UTC m=+1036.291875145" watchObservedRunningTime="2025-11-26 07:17:44.14825563 +0000 UTC m=+1036.294466796"
Nov 26 07:17:44 crc kubenswrapper[4909]: I1126 07:17:44.185817 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zk568"]
Nov 26 07:17:44 crc kubenswrapper[4909]: W1126 07:17:44.216453 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b71a783_4ce8_4d76_8023_65f4bc62bb61.slice/crio-91a54fe15d0c32fdf112b917c683e1c51a5a19110324a792015b14df1ced82aa WatchSource:0}: Error finding container 91a54fe15d0c32fdf112b917c683e1c51a5a19110324a792015b14df1ced82aa: Status 404 returned error can't find the container with id 91a54fe15d0c32fdf112b917c683e1c51a5a19110324a792015b14df1ced82aa
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.107773 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4g5pn"]
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.108242 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" podUID="a4aa5148-793f-4241-9953-d2e6477f52a3" containerName="dnsmasq-dns" containerID="cri-o://2e7005e7d64211aa2609f1e2a1b76d67af0cff9e2d869ff14189cb4070c7a743" gracePeriod=10
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.111149 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.206359 4909 generic.go:334] "Generic (PLEG): container finished" podID="8b71a783-4ce8-4d76-8023-65f4bc62bb61" containerID="7ffc69ceeef9cb263000a0891df54bc89be9425b8af470572c9407553344e65c" exitCode=0
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.206434 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zk568" event={"ID":"8b71a783-4ce8-4d76-8023-65f4bc62bb61","Type":"ContainerDied","Data":"7ffc69ceeef9cb263000a0891df54bc89be9425b8af470572c9407553344e65c"}
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.206472 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zk568" event={"ID":"8b71a783-4ce8-4d76-8023-65f4bc62bb61","Type":"ContainerStarted","Data":"91a54fe15d0c32fdf112b917c683e1c51a5a19110324a792015b14df1ced82aa"}
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.207935 4909 generic.go:334] "Generic (PLEG): container finished" podID="817235db-6c0f-43f5-8328-6eee7baf5839" containerID="e3e7f230581faf36b2c79ad68aca0174468cc8fd033f7e484814c3189b2ac392" exitCode=0
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.207961 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-82vbv" event={"ID":"817235db-6c0f-43f5-8328-6eee7baf5839","Type":"ContainerDied","Data":"e3e7f230581faf36b2c79ad68aca0174468cc8fd033f7e484814c3189b2ac392"}
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.248565 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-wq7zz"]
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.255487 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.259103 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.260647 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wq7zz"]
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.457140 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.457203 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6q2v\" (UniqueName: \"kubernetes.io/projected/90345f3d-54b4-4d46-87b1-df25e4e017b1-kube-api-access-h6q2v\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.457240 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-dns-svc\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.457270 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-config\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.457293 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.558669 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.558730 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6q2v\" (UniqueName: \"kubernetes.io/projected/90345f3d-54b4-4d46-87b1-df25e4e017b1-kube-api-access-h6q2v\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.558765 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-dns-svc\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.558792 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-config\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.558816 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.559905 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.560458 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-dns-svc\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.560677 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.561276 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-config\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.586019 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6q2v\" (UniqueName: \"kubernetes.io/projected/90345f3d-54b4-4d46-87b1-df25e4e017b1-kube-api-access-h6q2v\") pod \"dnsmasq-dns-698758b865-wq7zz\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:45 crc kubenswrapper[4909]: I1126 07:17:45.588816 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.017036 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wq7zz"]
Nov 26 07:17:46 crc kubenswrapper[4909]: W1126 07:17:46.019255 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90345f3d_54b4_4d46_87b1_df25e4e017b1.slice/crio-0fdcb0d9c49003ba06546150e1af4df0250f6394a8b353cec99c78390dcaef53 WatchSource:0}: Error finding container 0fdcb0d9c49003ba06546150e1af4df0250f6394a8b353cec99c78390dcaef53: Status 404 returned error can't find the container with id 0fdcb0d9c49003ba06546150e1af4df0250f6394a8b353cec99c78390dcaef53
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.216040 4909 generic.go:334] "Generic (PLEG): container finished" podID="a4aa5148-793f-4241-9953-d2e6477f52a3" containerID="2e7005e7d64211aa2609f1e2a1b76d67af0cff9e2d869ff14189cb4070c7a743" exitCode=0
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.216114 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" event={"ID":"a4aa5148-793f-4241-9953-d2e6477f52a3","Type":"ContainerDied","Data":"2e7005e7d64211aa2609f1e2a1b76d67af0cff9e2d869ff14189cb4070c7a743"}
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.217377 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wq7zz" event={"ID":"90345f3d-54b4-4d46-87b1-df25e4e017b1","Type":"ContainerStarted","Data":"0fdcb0d9c49003ba06546150e1af4df0250f6394a8b353cec99c78390dcaef53"}
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.333034 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.343193 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.347916 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.348146 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.348331 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.348512 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-8bhpq"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.372770 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.454794 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-82vbv"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.474794 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kmz\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-kube-api-access-d5kmz\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.475057 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-lock\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.475265 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-cache\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.475385 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.475728 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.577229 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvdcx\" (UniqueName: \"kubernetes.io/projected/817235db-6c0f-43f5-8328-6eee7baf5839-kube-api-access-hvdcx\") pod \"817235db-6c0f-43f5-8328-6eee7baf5839\" (UID: \"817235db-6c0f-43f5-8328-6eee7baf5839\") "
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.577743 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-cache\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.577787 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.577833 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.577911 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5kmz\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-kube-api-access-d5kmz\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.577938 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-lock\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.578379 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-cache\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: E1126 07:17:46.578571 4909 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 26 07:17:46 crc kubenswrapper[4909]: E1126 07:17:46.578615 4909 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 26 07:17:46 crc kubenswrapper[4909]: E1126 07:17:46.578669 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift podName:93f8db39-0460-4b6a-89fe-0e9bb565462e nodeName:}" failed. No retries permitted until 2025-11-26 07:17:47.078648648 +0000 UTC m=+1039.224859894 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift") pod "swift-storage-0" (UID: "93f8db39-0460-4b6a-89fe-0e9bb565462e") : configmap "swift-ring-files" not found
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.578989 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.579680 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-lock\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.587884 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817235db-6c0f-43f5-8328-6eee7baf5839-kube-api-access-hvdcx" (OuterVolumeSpecName: "kube-api-access-hvdcx") pod "817235db-6c0f-43f5-8328-6eee7baf5839" (UID: "817235db-6c0f-43f5-8328-6eee7baf5839"). InnerVolumeSpecName "kube-api-access-hvdcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.611851 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5kmz\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-kube-api-access-d5kmz\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.625738 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.680143 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvdcx\" (UniqueName: \"kubernetes.io/projected/817235db-6c0f-43f5-8328-6eee7baf5839-kube-api-access-hvdcx\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.689919 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zk568"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.781107 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqsx4\" (UniqueName: \"kubernetes.io/projected/8b71a783-4ce8-4d76-8023-65f4bc62bb61-kube-api-access-wqsx4\") pod \"8b71a783-4ce8-4d76-8023-65f4bc62bb61\" (UID: \"8b71a783-4ce8-4d76-8023-65f4bc62bb61\") "
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.786159 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b71a783-4ce8-4d76-8023-65f4bc62bb61-kube-api-access-wqsx4" (OuterVolumeSpecName: "kube-api-access-wqsx4") pod "8b71a783-4ce8-4d76-8023-65f4bc62bb61" (UID: "8b71a783-4ce8-4d76-8023-65f4bc62bb61"). InnerVolumeSpecName "kube-api-access-wqsx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.854656 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn"
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.883736 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqsx4\" (UniqueName: \"kubernetes.io/projected/8b71a783-4ce8-4d76-8023-65f4bc62bb61-kube-api-access-wqsx4\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.985267 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-config\") pod \"a4aa5148-793f-4241-9953-d2e6477f52a3\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") "
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.985579 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-ovsdbserver-sb\") pod \"a4aa5148-793f-4241-9953-d2e6477f52a3\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") "
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.985648 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-dns-svc\") pod \"a4aa5148-793f-4241-9953-d2e6477f52a3\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") "
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.985698 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xxxd\" (UniqueName: \"kubernetes.io/projected/a4aa5148-793f-4241-9953-d2e6477f52a3-kube-api-access-6xxxd\") pod \"a4aa5148-793f-4241-9953-d2e6477f52a3\" (UID: \"a4aa5148-793f-4241-9953-d2e6477f52a3\") "
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.990257 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4aa5148-793f-4241-9953-d2e6477f52a3-kube-api-access-6xxxd" (OuterVolumeSpecName: "kube-api-access-6xxxd") pod "a4aa5148-793f-4241-9953-d2e6477f52a3" (UID: "a4aa5148-793f-4241-9953-d2e6477f52a3"). InnerVolumeSpecName "kube-api-access-6xxxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:17:46 crc kubenswrapper[4909]: I1126 07:17:46.990357 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt"
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.037106 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-config" (OuterVolumeSpecName: "config") pod "a4aa5148-793f-4241-9953-d2e6477f52a3" (UID: "a4aa5148-793f-4241-9953-d2e6477f52a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.047210 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4aa5148-793f-4241-9953-d2e6477f52a3" (UID: "a4aa5148-793f-4241-9953-d2e6477f52a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.055208 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4aa5148-793f-4241-9953-d2e6477f52a3" (UID: "a4aa5148-793f-4241-9953-d2e6477f52a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.088128 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.088306 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.088320 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.088335 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xxxd\" (UniqueName: \"kubernetes.io/projected/a4aa5148-793f-4241-9953-d2e6477f52a3-kube-api-access-6xxxd\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.088349 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4aa5148-793f-4241-9953-d2e6477f52a3-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:47 crc kubenswrapper[4909]: E1126 07:17:47.089364 4909 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 26 07:17:47 crc kubenswrapper[4909]: E1126 07:17:47.089388 4909 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 26 07:17:47 crc kubenswrapper[4909]: E1126 07:17:47.089431 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift podName:93f8db39-0460-4b6a-89fe-0e9bb565462e nodeName:}" failed. No retries permitted until 2025-11-26 07:17:48.089416903 +0000 UTC m=+1040.235628079 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift") pod "swift-storage-0" (UID: "93f8db39-0460-4b6a-89fe-0e9bb565462e") : configmap "swift-ring-files" not found
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.225453 4909 generic.go:334] "Generic (PLEG): container finished" podID="90345f3d-54b4-4d46-87b1-df25e4e017b1" containerID="3763ab134518f05b4ced8e83f5fcd07ca09ca4666c069c56092ecb965117ea1c" exitCode=0
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.225532 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wq7zz" event={"ID":"90345f3d-54b4-4d46-87b1-df25e4e017b1","Type":"ContainerDied","Data":"3763ab134518f05b4ced8e83f5fcd07ca09ca4666c069c56092ecb965117ea1c"}
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.227013 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-82vbv" event={"ID":"817235db-6c0f-43f5-8328-6eee7baf5839","Type":"ContainerDied","Data":"32f82b39420b2d865c23760a911b125f3c25eeb80824694c536e0097340ed29a"}
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.227042 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f82b39420b2d865c23760a911b125f3c25eeb80824694c536e0097340ed29a"
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.227099 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-82vbv"
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.230803 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zk568" event={"ID":"8b71a783-4ce8-4d76-8023-65f4bc62bb61","Type":"ContainerDied","Data":"91a54fe15d0c32fdf112b917c683e1c51a5a19110324a792015b14df1ced82aa"}
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.230824 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zk568"
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.230842 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91a54fe15d0c32fdf112b917c683e1c51a5a19110324a792015b14df1ced82aa"
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.233702 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn" event={"ID":"a4aa5148-793f-4241-9953-d2e6477f52a3","Type":"ContainerDied","Data":"301b362e8f7f7127934e3f19d918c212209b62425ac86abdb296e0d58c09e896"}
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.233727 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-4g5pn"
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.233756 4909 scope.go:117] "RemoveContainer" containerID="2e7005e7d64211aa2609f1e2a1b76d67af0cff9e2d869ff14189cb4070c7a743"
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.254442 4909 scope.go:117] "RemoveContainer" containerID="78d52cacdc11017a35ef99767ed6e1cb3ca1441d05a540d46419f108c68b215e"
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.302538 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4g5pn"]
Nov 26 07:17:47 crc kubenswrapper[4909]: I1126 07:17:47.320062 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-4g5pn"]
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.115030 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:48 crc kubenswrapper[4909]: E1126 07:17:48.115223 4909 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 26 07:17:48 crc kubenswrapper[4909]: E1126 07:17:48.115327 4909 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 26 07:17:48 crc kubenswrapper[4909]: E1126 07:17:48.115380 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift podName:93f8db39-0460-4b6a-89fe-0e9bb565462e nodeName:}" failed. No retries permitted until 2025-11-26 07:17:50.115362271 +0000 UTC m=+1042.261573437 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift") pod "swift-storage-0" (UID: "93f8db39-0460-4b6a-89fe-0e9bb565462e") : configmap "swift-ring-files" not found
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.243617 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wq7zz" event={"ID":"90345f3d-54b4-4d46-87b1-df25e4e017b1","Type":"ContainerStarted","Data":"563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99"}
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.243731 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.261711 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-wq7zz" podStartSLOduration=3.261692088 podStartE2EDuration="3.261692088s" podCreationTimestamp="2025-11-26 07:17:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:17:48.259137489 +0000 UTC m=+1040.405348695" watchObservedRunningTime="2025-11-26 07:17:48.261692088 +0000 UTC m=+1040.407903274"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.508531 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4aa5148-793f-4241-9953-d2e6477f52a3" path="/var/lib/kubelet/pods/a4aa5148-793f-4241-9953-d2e6477f52a3/volumes"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.724797 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rwhgj"]
Nov 26 07:17:48 crc kubenswrapper[4909]: E1126 07:17:48.725203 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b71a783-4ce8-4d76-8023-65f4bc62bb61" containerName="mariadb-database-create"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.725226 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b71a783-4ce8-4d76-8023-65f4bc62bb61" containerName="mariadb-database-create"
Nov 26 07:17:48 crc kubenswrapper[4909]: E1126 07:17:48.725248 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817235db-6c0f-43f5-8328-6eee7baf5839" containerName="mariadb-database-create"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.725256 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="817235db-6c0f-43f5-8328-6eee7baf5839" containerName="mariadb-database-create"
Nov 26 07:17:48 crc kubenswrapper[4909]: E1126 07:17:48.725268 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4aa5148-793f-4241-9953-d2e6477f52a3" containerName="init"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.725275 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aa5148-793f-4241-9953-d2e6477f52a3" containerName="init"
Nov 26 07:17:48 crc kubenswrapper[4909]: E1126 07:17:48.725293 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4aa5148-793f-4241-9953-d2e6477f52a3" containerName="dnsmasq-dns"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.725300 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aa5148-793f-4241-9953-d2e6477f52a3" containerName="dnsmasq-dns"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.725497 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4aa5148-793f-4241-9953-d2e6477f52a3" containerName="dnsmasq-dns"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.725517 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b71a783-4ce8-4d76-8023-65f4bc62bb61" containerName="mariadb-database-create"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.725530 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="817235db-6c0f-43f5-8328-6eee7baf5839" containerName="mariadb-database-create"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.726169 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rwhgj"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.732476 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rwhgj"]
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.825057 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9n2\" (UniqueName: \"kubernetes.io/projected/c5bd7265-3f87-4e7b-9dc1-29e2e99c8771-kube-api-access-2p9n2\") pod \"glance-db-create-rwhgj\" (UID: \"c5bd7265-3f87-4e7b-9dc1-29e2e99c8771\") " pod="openstack/glance-db-create-rwhgj"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.926542 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p9n2\" (UniqueName: \"kubernetes.io/projected/c5bd7265-3f87-4e7b-9dc1-29e2e99c8771-kube-api-access-2p9n2\") pod \"glance-db-create-rwhgj\" (UID: \"c5bd7265-3f87-4e7b-9dc1-29e2e99c8771\") " pod="openstack/glance-db-create-rwhgj"
Nov 26 07:17:48 crc kubenswrapper[4909]: I1126 07:17:48.949829 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p9n2\" (UniqueName: \"kubernetes.io/projected/c5bd7265-3f87-4e7b-9dc1-29e2e99c8771-kube-api-access-2p9n2\") pod \"glance-db-create-rwhgj\" (UID: \"c5bd7265-3f87-4e7b-9dc1-29e2e99c8771\") " pod="openstack/glance-db-create-rwhgj"
Nov 26 07:17:49 crc kubenswrapper[4909]: I1126 07:17:49.043136 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rwhgj"
Nov 26 07:17:49 crc kubenswrapper[4909]: I1126 07:17:49.271048 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rwhgj"]
Nov 26 07:17:49 crc kubenswrapper[4909]: W1126 07:17:49.278828 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5bd7265_3f87_4e7b_9dc1_29e2e99c8771.slice/crio-6595b3f671eb77f4ea6aabaac521f235a2086a647a07c4035ecc33b98725d512 WatchSource:0}: Error finding container 6595b3f671eb77f4ea6aabaac521f235a2086a647a07c4035ecc33b98725d512: Status 404 returned error can't find the container with id 6595b3f671eb77f4ea6aabaac521f235a2086a647a07c4035ecc33b98725d512
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.145686 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:50 crc kubenswrapper[4909]: E1126 07:17:50.145950 4909 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 26 07:17:50 crc kubenswrapper[4909]: E1126 07:17:50.145984 4909 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 26 07:17:50 crc kubenswrapper[4909]: E1126 07:17:50.146065 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift podName:93f8db39-0460-4b6a-89fe-0e9bb565462e nodeName:}" failed. No retries permitted until 2025-11-26 07:17:54.146039729 +0000 UTC m=+1046.292250935 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift") pod "swift-storage-0" (UID: "93f8db39-0460-4b6a-89fe-0e9bb565462e") : configmap "swift-ring-files" not found
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.203070 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zrrlr"]
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.204881 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zrrlr"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.207139 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.208626 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.208877 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.240311 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zrrlr"]
Nov 26 07:17:50 crc kubenswrapper[4909]: E1126 07:17:50.241089 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-shq55 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-shq55 ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-zrrlr" podUID="6ae8e3fb-d4dd-446a-8134-eefc9d4ee747"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.253573 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-g4ljp"]
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.254760 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.260152 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zrrlr"]
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.264016 4909 generic.go:334] "Generic (PLEG): container finished" podID="c5bd7265-3f87-4e7b-9dc1-29e2e99c8771" containerID="e481b641f17f22d80faee3fa2370145fafe49f1f7b46a9411e55d35dfb5b767d" exitCode=0
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.264084 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zrrlr"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.264731 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rwhgj" event={"ID":"c5bd7265-3f87-4e7b-9dc1-29e2e99c8771","Type":"ContainerDied","Data":"e481b641f17f22d80faee3fa2370145fafe49f1f7b46a9411e55d35dfb5b767d"}
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.264764 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rwhgj" event={"ID":"c5bd7265-3f87-4e7b-9dc1-29e2e99c8771","Type":"ContainerStarted","Data":"6595b3f671eb77f4ea6aabaac521f235a2086a647a07c4035ecc33b98725d512"}
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.269485 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-g4ljp"]
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.277987 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zrrlr"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.349752 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-ring-data-devices\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.349796 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-combined-ca-bundle\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.349882 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-scripts\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.349915 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-swiftconf\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.349933 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkd64\" (UniqueName: \"kubernetes.io/projected/8bb4cd6e-2d04-470f-b900-32a9a30a4137-kube-api-access-mkd64\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.349988 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-dispersionconf\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.350010 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8bb4cd6e-2d04-470f-b900-32a9a30a4137-etc-swift\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.451516 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-dispersionconf\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.451565 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8bb4cd6e-2d04-470f-b900-32a9a30a4137-etc-swift\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.451627 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-ring-data-devices\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.451650 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-combined-ca-bundle\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.451696 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-scripts\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.451718 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-swiftconf\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.451733 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkd64\" (UniqueName: \"kubernetes.io/projected/8bb4cd6e-2d04-470f-b900-32a9a30a4137-kube-api-access-mkd64\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.452431 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8bb4cd6e-2d04-470f-b900-32a9a30a4137-etc-swift\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.452466 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-ring-data-devices\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.452548 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-scripts\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.457553 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-dispersionconf\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.462456 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-swiftconf\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.462458 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-combined-ca-bundle\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.470638 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkd64\" (UniqueName: \"kubernetes.io/projected/8bb4cd6e-2d04-470f-b900-32a9a30a4137-kube-api-access-mkd64\") pod \"swift-ring-rebalance-g4ljp\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:50 crc kubenswrapper[4909]: I1126 07:17:50.570952 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-g4ljp"
Nov 26 07:17:51 crc kubenswrapper[4909]: I1126 07:17:51.025433 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-g4ljp"]
Nov 26 07:17:51 crc kubenswrapper[4909]: I1126 07:17:51.275146 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-g4ljp" event={"ID":"8bb4cd6e-2d04-470f-b900-32a9a30a4137","Type":"ContainerStarted","Data":"d456a3d039ea5f79872ab8c0128a5d1199e0a0594667b17d958372f9cf111a5a"}
Nov 26 07:17:51 crc kubenswrapper[4909]: I1126 07:17:51.275174 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zrrlr"
Nov 26 07:17:51 crc kubenswrapper[4909]: I1126 07:17:51.329906 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zrrlr"]
Nov 26 07:17:51 crc kubenswrapper[4909]: I1126 07:17:51.339796 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-zrrlr"]
Nov 26 07:17:51 crc kubenswrapper[4909]: I1126 07:17:51.615835 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rwhgj"
Nov 26 07:17:51 crc kubenswrapper[4909]: I1126 07:17:51.774587 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p9n2\" (UniqueName: \"kubernetes.io/projected/c5bd7265-3f87-4e7b-9dc1-29e2e99c8771-kube-api-access-2p9n2\") pod \"c5bd7265-3f87-4e7b-9dc1-29e2e99c8771\" (UID: \"c5bd7265-3f87-4e7b-9dc1-29e2e99c8771\") "
Nov 26 07:17:51 crc kubenswrapper[4909]: I1126 07:17:51.782510 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bd7265-3f87-4e7b-9dc1-29e2e99c8771-kube-api-access-2p9n2" (OuterVolumeSpecName: "kube-api-access-2p9n2") pod "c5bd7265-3f87-4e7b-9dc1-29e2e99c8771" (UID: "c5bd7265-3f87-4e7b-9dc1-29e2e99c8771"). InnerVolumeSpecName "kube-api-access-2p9n2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:17:51 crc kubenswrapper[4909]: I1126 07:17:51.876857 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p9n2\" (UniqueName: \"kubernetes.io/projected/c5bd7265-3f87-4e7b-9dc1-29e2e99c8771-kube-api-access-2p9n2\") on node \"crc\" DevicePath \"\""
Nov 26 07:17:52 crc kubenswrapper[4909]: I1126 07:17:52.286800 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rwhgj" event={"ID":"c5bd7265-3f87-4e7b-9dc1-29e2e99c8771","Type":"ContainerDied","Data":"6595b3f671eb77f4ea6aabaac521f235a2086a647a07c4035ecc33b98725d512"}
Nov 26 07:17:52 crc kubenswrapper[4909]: I1126 07:17:52.286844 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rwhgj"
Nov 26 07:17:52 crc kubenswrapper[4909]: I1126 07:17:52.286843 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6595b3f671eb77f4ea6aabaac521f235a2086a647a07c4035ecc33b98725d512"
Nov 26 07:17:52 crc kubenswrapper[4909]: I1126 07:17:52.508443 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ae8e3fb-d4dd-446a-8134-eefc9d4ee747" path="/var/lib/kubelet/pods/6ae8e3fb-d4dd-446a-8134-eefc9d4ee747/volumes"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.209886 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2db1-account-create-2rztp"]
Nov 26 07:17:53 crc kubenswrapper[4909]: E1126 07:17:53.210389 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5bd7265-3f87-4e7b-9dc1-29e2e99c8771" containerName="mariadb-database-create"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.210422 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5bd7265-3f87-4e7b-9dc1-29e2e99c8771" containerName="mariadb-database-create"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.210719 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5bd7265-3f87-4e7b-9dc1-29e2e99c8771" containerName="mariadb-database-create"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.211615 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2db1-account-create-2rztp"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.214787 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.229611 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2db1-account-create-2rztp"]
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.302467 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc62s\" (UniqueName: \"kubernetes.io/projected/42945d07-91de-4b60-b6a0-e52dffe51a0d-kube-api-access-cc62s\") pod \"keystone-2db1-account-create-2rztp\" (UID: \"42945d07-91de-4b60-b6a0-e52dffe51a0d\") " pod="openstack/keystone-2db1-account-create-2rztp"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.403791 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc62s\" (UniqueName: \"kubernetes.io/projected/42945d07-91de-4b60-b6a0-e52dffe51a0d-kube-api-access-cc62s\") pod \"keystone-2db1-account-create-2rztp\" (UID: \"42945d07-91de-4b60-b6a0-e52dffe51a0d\") " pod="openstack/keystone-2db1-account-create-2rztp"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.428481 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc62s\" (UniqueName: \"kubernetes.io/projected/42945d07-91de-4b60-b6a0-e52dffe51a0d-kube-api-access-cc62s\") pod \"keystone-2db1-account-create-2rztp\" (UID: \"42945d07-91de-4b60-b6a0-e52dffe51a0d\") " pod="openstack/keystone-2db1-account-create-2rztp"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.499528 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e922-account-create-dz958"]
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.500932 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e922-account-create-dz958"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.503261 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.508298 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e922-account-create-dz958"]
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.542768 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2db1-account-create-2rztp"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.607294 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vl6n\" (UniqueName: \"kubernetes.io/projected/4688369a-740e-448c-b5ef-72243cc7597a-kube-api-access-4vl6n\") pod \"placement-e922-account-create-dz958\" (UID: \"4688369a-740e-448c-b5ef-72243cc7597a\") " pod="openstack/placement-e922-account-create-dz958"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.709041 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vl6n\" (UniqueName: \"kubernetes.io/projected/4688369a-740e-448c-b5ef-72243cc7597a-kube-api-access-4vl6n\") pod \"placement-e922-account-create-dz958\" (UID: \"4688369a-740e-448c-b5ef-72243cc7597a\") " pod="openstack/placement-e922-account-create-dz958"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.730163 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vl6n\" (UniqueName: \"kubernetes.io/projected/4688369a-740e-448c-b5ef-72243cc7597a-kube-api-access-4vl6n\") pod \"placement-e922-account-create-dz958\" (UID: \"4688369a-740e-448c-b5ef-72243cc7597a\") " pod="openstack/placement-e922-account-create-dz958"
Nov 26 07:17:53 crc kubenswrapper[4909]: I1126 07:17:53.820644 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e922-account-create-dz958"
Nov 26 07:17:54 crc kubenswrapper[4909]: I1126 07:17:54.216785 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0"
Nov 26 07:17:54 crc kubenswrapper[4909]: E1126 07:17:54.217063 4909 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 26 07:17:54 crc kubenswrapper[4909]: E1126 07:17:54.217082 4909 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 26 07:17:54 crc kubenswrapper[4909]: E1126 07:17:54.217128 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift podName:93f8db39-0460-4b6a-89fe-0e9bb565462e nodeName:}" failed. No retries permitted until 2025-11-26 07:18:02.21711193 +0000 UTC m=+1054.363323096 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift") pod "swift-storage-0" (UID: "93f8db39-0460-4b6a-89fe-0e9bb565462e") : configmap "swift-ring-files" not found
Nov 26 07:17:54 crc kubenswrapper[4909]: I1126 07:17:54.816887 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 26 07:17:55 crc kubenswrapper[4909]: I1126 07:17:55.276821 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2db1-account-create-2rztp"]
Nov 26 07:17:55 crc kubenswrapper[4909]: W1126 07:17:55.290196 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42945d07_91de_4b60_b6a0_e52dffe51a0d.slice/crio-42539b0986da293cdd93a22509a3867a0e48391d26d7efc98b93003b12e4e2af WatchSource:0}: Error finding container 42539b0986da293cdd93a22509a3867a0e48391d26d7efc98b93003b12e4e2af: Status 404 returned error can't find the container with id 42539b0986da293cdd93a22509a3867a0e48391d26d7efc98b93003b12e4e2af
Nov 26 07:17:55 crc kubenswrapper[4909]: I1126 07:17:55.310840 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-g4ljp" event={"ID":"8bb4cd6e-2d04-470f-b900-32a9a30a4137","Type":"ContainerStarted","Data":"8a0d13185b0fd0f077d49e18f1b8a3c5a33b10dd4e5c9d4f488c90bb166a1761"}
Nov 26 07:17:55 crc kubenswrapper[4909]: I1126 07:17:55.313372 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2db1-account-create-2rztp" event={"ID":"42945d07-91de-4b60-b6a0-e52dffe51a0d","Type":"ContainerStarted","Data":"42539b0986da293cdd93a22509a3867a0e48391d26d7efc98b93003b12e4e2af"}
Nov 26 07:17:55 crc kubenswrapper[4909]: I1126 07:17:55.353792 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e922-account-create-dz958"]
Nov 26 07:17:55 crc kubenswrapper[4909]: I1126 07:17:55.354251 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-g4ljp" podStartSLOduration=1.574901715 podStartE2EDuration="5.354223896s" podCreationTimestamp="2025-11-26 07:17:50 +0000 UTC" firstStartedPulling="2025-11-26 07:17:51.032438545 +0000 UTC m=+1043.178649711" lastFinishedPulling="2025-11-26 07:17:54.811760736 +0000 UTC m=+1046.957971892" observedRunningTime="2025-11-26 07:17:55.331363272 +0000 UTC m=+1047.477574458" watchObservedRunningTime="2025-11-26 07:17:55.354223896 +0000 UTC m=+1047.500435052"
Nov 26 07:17:55 crc kubenswrapper[4909]: I1126 07:17:55.591227 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-wq7zz"
Nov 26 07:17:55 crc kubenswrapper[4909]: I1126 07:17:55.647281 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8ssxt"]
Nov 26 07:17:55 crc kubenswrapper[4909]: I1126 07:17:55.647574 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" podUID="d0b52760-1541-434d-ba86-743f5fbfbcb8" containerName="dnsmasq-dns" containerID="cri-o://254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a" gracePeriod=10
Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.072979 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt"
Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.151495 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krzmx\" (UniqueName: \"kubernetes.io/projected/d0b52760-1541-434d-ba86-743f5fbfbcb8-kube-api-access-krzmx\") pod \"d0b52760-1541-434d-ba86-743f5fbfbcb8\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") "
Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.151611 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-nb\") pod \"d0b52760-1541-434d-ba86-743f5fbfbcb8\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") "
Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.151749 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-sb\") pod \"d0b52760-1541-434d-ba86-743f5fbfbcb8\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") "
Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.151820 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-dns-svc\") pod \"d0b52760-1541-434d-ba86-743f5fbfbcb8\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") "
Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.152006 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-config\") pod \"d0b52760-1541-434d-ba86-743f5fbfbcb8\" (UID: \"d0b52760-1541-434d-ba86-743f5fbfbcb8\") "
Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.161949 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b52760-1541-434d-ba86-743f5fbfbcb8-kube-api-access-krzmx" (OuterVolumeSpecName: "kube-api-access-krzmx") pod "d0b52760-1541-434d-ba86-743f5fbfbcb8" (UID: "d0b52760-1541-434d-ba86-743f5fbfbcb8"). InnerVolumeSpecName "kube-api-access-krzmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.195310 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d0b52760-1541-434d-ba86-743f5fbfbcb8" (UID: "d0b52760-1541-434d-ba86-743f5fbfbcb8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.208438 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-config" (OuterVolumeSpecName: "config") pod "d0b52760-1541-434d-ba86-743f5fbfbcb8" (UID: "d0b52760-1541-434d-ba86-743f5fbfbcb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.210430 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d0b52760-1541-434d-ba86-743f5fbfbcb8" (UID: "d0b52760-1541-434d-ba86-743f5fbfbcb8"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.221912 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0b52760-1541-434d-ba86-743f5fbfbcb8" (UID: "d0b52760-1541-434d-ba86-743f5fbfbcb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.253532 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.253574 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krzmx\" (UniqueName: \"kubernetes.io/projected/d0b52760-1541-434d-ba86-743f5fbfbcb8-kube-api-access-krzmx\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.253588 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.253615 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.253624 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b52760-1541-434d-ba86-743f5fbfbcb8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.344110 4909 generic.go:334] "Generic (PLEG): container finished" podID="d0b52760-1541-434d-ba86-743f5fbfbcb8" containerID="254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a" exitCode=0 Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.344182 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" event={"ID":"d0b52760-1541-434d-ba86-743f5fbfbcb8","Type":"ContainerDied","Data":"254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a"} Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.344211 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" event={"ID":"d0b52760-1541-434d-ba86-743f5fbfbcb8","Type":"ContainerDied","Data":"fd3f601979321606154a37409ef0862b5782b88e7f33a459a9739195247ac493"} Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.344237 4909 scope.go:117] "RemoveContainer" containerID="254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.346263 4909 generic.go:334] "Generic (PLEG): container finished" podID="4688369a-740e-448c-b5ef-72243cc7597a" containerID="304b4f863a8089e3faba398f81716b22c7a1e24312716d91f2e8e42dc45b0c88" exitCode=0 Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.346374 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e922-account-create-dz958" event={"ID":"4688369a-740e-448c-b5ef-72243cc7597a","Type":"ContainerDied","Data":"304b4f863a8089e3faba398f81716b22c7a1e24312716d91f2e8e42dc45b0c88"} Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.346404 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-e922-account-create-dz958" event={"ID":"4688369a-740e-448c-b5ef-72243cc7597a","Type":"ContainerStarted","Data":"ae0485ac2a9eb7f16720f6a2abf44cb247ec8881e6800d5adb94ccc0ff3c44eb"} Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.346910 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8ssxt" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.347849 4909 generic.go:334] "Generic (PLEG): container finished" podID="42945d07-91de-4b60-b6a0-e52dffe51a0d" containerID="9de267a3ae62263d011dcd2f78926c503195c746bfce60ac6d585cd418181fee" exitCode=0 Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.348042 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2db1-account-create-2rztp" event={"ID":"42945d07-91de-4b60-b6a0-e52dffe51a0d","Type":"ContainerDied","Data":"9de267a3ae62263d011dcd2f78926c503195c746bfce60ac6d585cd418181fee"} Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.371707 4909 scope.go:117] "RemoveContainer" containerID="9f3866bf41307ad8d70ca307667303bc90e79b3563b9b1a10f97208562ed0d8d" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.389965 4909 scope.go:117] "RemoveContainer" containerID="254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a" Nov 26 07:17:56 crc kubenswrapper[4909]: E1126 07:17:56.391421 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a\": container with ID starting with 254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a not found: ID does not exist" containerID="254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.391482 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a"} err="failed to get container status \"254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a\": rpc error: code = NotFound desc = could not find container \"254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a\": container with ID starting with 254b7bfdfeec6975c1bfeb78062502e965cd80ddfea0bb6e6f705a0ef383861a not found: ID does not exist" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.391509 4909 scope.go:117] "RemoveContainer" containerID="9f3866bf41307ad8d70ca307667303bc90e79b3563b9b1a10f97208562ed0d8d" Nov 26 07:17:56 crc kubenswrapper[4909]: E1126 07:17:56.393033 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f3866bf41307ad8d70ca307667303bc90e79b3563b9b1a10f97208562ed0d8d\": container with ID starting with 9f3866bf41307ad8d70ca307667303bc90e79b3563b9b1a10f97208562ed0d8d not found: ID does not exist" containerID="9f3866bf41307ad8d70ca307667303bc90e79b3563b9b1a10f97208562ed0d8d" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.393182 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f3866bf41307ad8d70ca307667303bc90e79b3563b9b1a10f97208562ed0d8d"} err="failed to get container status \"9f3866bf41307ad8d70ca307667303bc90e79b3563b9b1a10f97208562ed0d8d\": rpc error: code = NotFound desc = could not find container \"9f3866bf41307ad8d70ca307667303bc90e79b3563b9b1a10f97208562ed0d8d\": container with ID starting with 
9f3866bf41307ad8d70ca307667303bc90e79b3563b9b1a10f97208562ed0d8d not found: ID does not exist" Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.404115 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8ssxt"] Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.409532 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8ssxt"] Nov 26 07:17:56 crc kubenswrapper[4909]: I1126 07:17:56.507698 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b52760-1541-434d-ba86-743f5fbfbcb8" path="/var/lib/kubelet/pods/d0b52760-1541-434d-ba86-743f5fbfbcb8/volumes" Nov 26 07:17:57 crc kubenswrapper[4909]: I1126 07:17:57.853547 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2db1-account-create-2rztp" Nov 26 07:17:57 crc kubenswrapper[4909]: I1126 07:17:57.859165 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e922-account-create-dz958" Nov 26 07:17:57 crc kubenswrapper[4909]: I1126 07:17:57.998181 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc62s\" (UniqueName: \"kubernetes.io/projected/42945d07-91de-4b60-b6a0-e52dffe51a0d-kube-api-access-cc62s\") pod \"42945d07-91de-4b60-b6a0-e52dffe51a0d\" (UID: \"42945d07-91de-4b60-b6a0-e52dffe51a0d\") " Nov 26 07:17:57 crc kubenswrapper[4909]: I1126 07:17:57.998245 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vl6n\" (UniqueName: \"kubernetes.io/projected/4688369a-740e-448c-b5ef-72243cc7597a-kube-api-access-4vl6n\") pod \"4688369a-740e-448c-b5ef-72243cc7597a\" (UID: \"4688369a-740e-448c-b5ef-72243cc7597a\") " Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.003564 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42945d07-91de-4b60-b6a0-e52dffe51a0d-kube-api-access-cc62s" (OuterVolumeSpecName: "kube-api-access-cc62s") pod "42945d07-91de-4b60-b6a0-e52dffe51a0d" (UID: "42945d07-91de-4b60-b6a0-e52dffe51a0d"). InnerVolumeSpecName "kube-api-access-cc62s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.003635 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4688369a-740e-448c-b5ef-72243cc7597a-kube-api-access-4vl6n" (OuterVolumeSpecName: "kube-api-access-4vl6n") pod "4688369a-740e-448c-b5ef-72243cc7597a" (UID: "4688369a-740e-448c-b5ef-72243cc7597a"). InnerVolumeSpecName "kube-api-access-4vl6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.100648 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc62s\" (UniqueName: \"kubernetes.io/projected/42945d07-91de-4b60-b6a0-e52dffe51a0d-kube-api-access-cc62s\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.100678 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vl6n\" (UniqueName: \"kubernetes.io/projected/4688369a-740e-448c-b5ef-72243cc7597a-kube-api-access-4vl6n\") on node \"crc\" DevicePath \"\"" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.369298 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2db1-account-create-2rztp" event={"ID":"42945d07-91de-4b60-b6a0-e52dffe51a0d","Type":"ContainerDied","Data":"42539b0986da293cdd93a22509a3867a0e48391d26d7efc98b93003b12e4e2af"} Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.369373 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42539b0986da293cdd93a22509a3867a0e48391d26d7efc98b93003b12e4e2af" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.369310 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2db1-account-create-2rztp" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.370728 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e922-account-create-dz958" event={"ID":"4688369a-740e-448c-b5ef-72243cc7597a","Type":"ContainerDied","Data":"ae0485ac2a9eb7f16720f6a2abf44cb247ec8881e6800d5adb94ccc0ff3c44eb"} Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.370756 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae0485ac2a9eb7f16720f6a2abf44cb247ec8881e6800d5adb94ccc0ff3c44eb" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.370783 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e922-account-create-dz958" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.807481 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2354-account-create-b9ncs"] Nov 26 07:17:58 crc kubenswrapper[4909]: E1126 07:17:58.808073 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b52760-1541-434d-ba86-743f5fbfbcb8" containerName="init" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.808084 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b52760-1541-434d-ba86-743f5fbfbcb8" containerName="init" Nov 26 07:17:58 crc kubenswrapper[4909]: E1126 07:17:58.808093 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b52760-1541-434d-ba86-743f5fbfbcb8" containerName="dnsmasq-dns" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.808098 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b52760-1541-434d-ba86-743f5fbfbcb8" containerName="dnsmasq-dns" Nov 26 07:17:58 crc kubenswrapper[4909]: E1126 07:17:58.808110 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4688369a-740e-448c-b5ef-72243cc7597a" containerName="mariadb-account-create" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.808117 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4688369a-740e-448c-b5ef-72243cc7597a" containerName="mariadb-account-create" Nov 26 07:17:58 crc kubenswrapper[4909]: E1126 07:17:58.808129 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42945d07-91de-4b60-b6a0-e52dffe51a0d" containerName="mariadb-account-create" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.808135 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="42945d07-91de-4b60-b6a0-e52dffe51a0d" containerName="mariadb-account-create" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.808312 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="42945d07-91de-4b60-b6a0-e52dffe51a0d" containerName="mariadb-account-create" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.808328 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="4688369a-740e-448c-b5ef-72243cc7597a" containerName="mariadb-account-create" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.808337 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b52760-1541-434d-ba86-743f5fbfbcb8" containerName="dnsmasq-dns" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.808825 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2354-account-create-b9ncs" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.810957 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.818510 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2354-account-create-b9ncs"] Nov 26 07:17:58 crc kubenswrapper[4909]: I1126 07:17:58.912479 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sqpq\" (UniqueName: \"kubernetes.io/projected/7219c3e0-3c80-4c3a-b0c1-1918cb3980ac-kube-api-access-8sqpq\") pod \"glance-2354-account-create-b9ncs\" (UID: \"7219c3e0-3c80-4c3a-b0c1-1918cb3980ac\") " pod="openstack/glance-2354-account-create-b9ncs" Nov 26 07:17:59 crc kubenswrapper[4909]: I1126 07:17:59.014842 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sqpq\" (UniqueName: \"kubernetes.io/projected/7219c3e0-3c80-4c3a-b0c1-1918cb3980ac-kube-api-access-8sqpq\") pod \"glance-2354-account-create-b9ncs\" (UID: \"7219c3e0-3c80-4c3a-b0c1-1918cb3980ac\") " pod="openstack/glance-2354-account-create-b9ncs" Nov 26 07:17:59 crc kubenswrapper[4909]: I1126 07:17:59.032482 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sqpq\" (UniqueName: \"kubernetes.io/projected/7219c3e0-3c80-4c3a-b0c1-1918cb3980ac-kube-api-access-8sqpq\") pod \"glance-2354-account-create-b9ncs\" (UID: \"7219c3e0-3c80-4c3a-b0c1-1918cb3980ac\") " pod="openstack/glance-2354-account-create-b9ncs" Nov 26 07:17:59 crc kubenswrapper[4909]: I1126 07:17:59.131480 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2354-account-create-b9ncs" Nov 26 07:17:59 crc kubenswrapper[4909]: I1126 07:17:59.402701 4909 generic.go:334] "Generic (PLEG): container finished" podID="37fbb13e-7e2e-451d-af0e-a648c4cde4c2" containerID="d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08" exitCode=0 Nov 26 07:17:59 crc kubenswrapper[4909]: I1126 07:17:59.402778 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"37fbb13e-7e2e-451d-af0e-a648c4cde4c2","Type":"ContainerDied","Data":"d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08"} Nov 26 07:17:59 crc kubenswrapper[4909]: I1126 07:17:59.406651 4909 generic.go:334] "Generic (PLEG): container finished" podID="e827f391-2fcb-4758-ae5e-deef3c712e53" containerID="3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f" exitCode=0 Nov 26 07:17:59 crc kubenswrapper[4909]: I1126 07:17:59.406718 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e827f391-2fcb-4758-ae5e-deef3c712e53","Type":"ContainerDied","Data":"3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f"} Nov 26 07:17:59 crc kubenswrapper[4909]: I1126 07:17:59.551128 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2354-account-create-b9ncs"] Nov 26 07:17:59 crc kubenswrapper[4909]: W1126 07:17:59.559330 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7219c3e0_3c80_4c3a_b0c1_1918cb3980ac.slice/crio-00cb5cd07e0151836efe7001599cc038639d4308b32e519817c3734b4c8eb9ba WatchSource:0}: Error finding container 00cb5cd07e0151836efe7001599cc038639d4308b32e519817c3734b4c8eb9ba: Status 404 
returned error can't find the container with id 00cb5cd07e0151836efe7001599cc038639d4308b32e519817c3734b4c8eb9ba Nov 26 07:18:00 crc kubenswrapper[4909]: I1126 07:18:00.416110 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e827f391-2fcb-4758-ae5e-deef3c712e53","Type":"ContainerStarted","Data":"a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32"} Nov 26 07:18:00 crc kubenswrapper[4909]: I1126 07:18:00.416609 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 26 07:18:00 crc kubenswrapper[4909]: I1126 07:18:00.418094 4909 generic.go:334] "Generic (PLEG): container finished" podID="7219c3e0-3c80-4c3a-b0c1-1918cb3980ac" containerID="96ff5b9f7374832505846555fd743e47ed81c4cea93def2037316c077db458ff" exitCode=0 Nov 26 07:18:00 crc kubenswrapper[4909]: I1126 07:18:00.418169 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2354-account-create-b9ncs" event={"ID":"7219c3e0-3c80-4c3a-b0c1-1918cb3980ac","Type":"ContainerDied","Data":"96ff5b9f7374832505846555fd743e47ed81c4cea93def2037316c077db458ff"} Nov 26 07:18:00 crc kubenswrapper[4909]: I1126 07:18:00.418511 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2354-account-create-b9ncs" event={"ID":"7219c3e0-3c80-4c3a-b0c1-1918cb3980ac","Type":"ContainerStarted","Data":"00cb5cd07e0151836efe7001599cc038639d4308b32e519817c3734b4c8eb9ba"} Nov 26 07:18:00 crc kubenswrapper[4909]: I1126 07:18:00.420314 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"37fbb13e-7e2e-451d-af0e-a648c4cde4c2","Type":"ContainerStarted","Data":"c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe"} Nov 26 07:18:00 crc kubenswrapper[4909]: I1126 07:18:00.420533 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:18:00 crc kubenswrapper[4909]: I1126 07:18:00.438616 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.350196326 podStartE2EDuration="1m2.43858223s" podCreationTimestamp="2025-11-26 07:16:58 +0000 UTC" firstStartedPulling="2025-11-26 07:17:00.670264434 +0000 UTC m=+992.816475600" lastFinishedPulling="2025-11-26 07:17:25.758650338 +0000 UTC m=+1017.904861504" observedRunningTime="2025-11-26 07:18:00.437982754 +0000 UTC m=+1052.584193920" watchObservedRunningTime="2025-11-26 07:18:00.43858223 +0000 UTC m=+1052.584793396" Nov 26 07:18:00 crc kubenswrapper[4909]: I1126 07:18:00.469211 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.950698152 podStartE2EDuration="1m2.469190546s" podCreationTimestamp="2025-11-26 07:16:58 +0000 UTC" firstStartedPulling="2025-11-26 07:17:01.046643472 +0000 UTC m=+993.192854638" lastFinishedPulling="2025-11-26 07:17:25.565135856 +0000 UTC m=+1017.711347032" observedRunningTime="2025-11-26 07:18:00.462322738 +0000 UTC m=+1052.608533904" watchObservedRunningTime="2025-11-26 07:18:00.469190546 +0000 UTC m=+1052.615401722" Nov 26 07:18:01 crc kubenswrapper[4909]: I1126 07:18:01.736314 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2354-account-create-b9ncs" Nov 26 07:18:01 crc kubenswrapper[4909]: I1126 07:18:01.866951 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sqpq\" (UniqueName: \"kubernetes.io/projected/7219c3e0-3c80-4c3a-b0c1-1918cb3980ac-kube-api-access-8sqpq\") pod \"7219c3e0-3c80-4c3a-b0c1-1918cb3980ac\" (UID: \"7219c3e0-3c80-4c3a-b0c1-1918cb3980ac\") " Nov 26 07:18:01 crc kubenswrapper[4909]: I1126 07:18:01.875841 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7219c3e0-3c80-4c3a-b0c1-1918cb3980ac-kube-api-access-8sqpq" (OuterVolumeSpecName: "kube-api-access-8sqpq") pod "7219c3e0-3c80-4c3a-b0c1-1918cb3980ac" (UID: "7219c3e0-3c80-4c3a-b0c1-1918cb3980ac"). InnerVolumeSpecName "kube-api-access-8sqpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:01 crc kubenswrapper[4909]: I1126 07:18:01.968710 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sqpq\" (UniqueName: \"kubernetes.io/projected/7219c3e0-3c80-4c3a-b0c1-1918cb3980ac-kube-api-access-8sqpq\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:02 crc kubenswrapper[4909]: I1126 07:18:02.272663 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0" Nov 26 07:18:02 crc kubenswrapper[4909]: I1126 07:18:02.278562 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift\") pod \"swift-storage-0\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") " pod="openstack/swift-storage-0" Nov 26 07:18:02 crc kubenswrapper[4909]: I1126 07:18:02.437309 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2354-account-create-b9ncs" event={"ID":"7219c3e0-3c80-4c3a-b0c1-1918cb3980ac","Type":"ContainerDied","Data":"00cb5cd07e0151836efe7001599cc038639d4308b32e519817c3734b4c8eb9ba"} Nov 26 07:18:02 crc kubenswrapper[4909]: I1126 07:18:02.437340 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2354-account-create-b9ncs" Nov 26 07:18:02 crc kubenswrapper[4909]: I1126 07:18:02.437357 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00cb5cd07e0151836efe7001599cc038639d4308b32e519817c3734b4c8eb9ba" Nov 26 07:18:02 crc kubenswrapper[4909]: I1126 07:18:02.439820 4909 generic.go:334] "Generic (PLEG): container finished" podID="8bb4cd6e-2d04-470f-b900-32a9a30a4137" containerID="8a0d13185b0fd0f077d49e18f1b8a3c5a33b10dd4e5c9d4f488c90bb166a1761" exitCode=0 Nov 26 07:18:02 crc kubenswrapper[4909]: I1126 07:18:02.439864 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-g4ljp" event={"ID":"8bb4cd6e-2d04-470f-b900-32a9a30a4137","Type":"ContainerDied","Data":"8a0d13185b0fd0f077d49e18f1b8a3c5a33b10dd4e5c9d4f488c90bb166a1761"} Nov 26 07:18:02 crc kubenswrapper[4909]: I1126 07:18:02.569545 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.148693 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 26 07:18:03 crc kubenswrapper[4909]: W1126 07:18:03.161795 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93f8db39_0460_4b6a_89fe_0e9bb565462e.slice/crio-2a66b364fe99e0d8692030d5621c03464a2d52ffe679e4b466236a36cf795de3 WatchSource:0}: Error finding container 2a66b364fe99e0d8692030d5621c03464a2d52ffe679e4b466236a36cf795de3: Status 404 returned error can't find the container with id 2a66b364fe99e0d8692030d5621c03464a2d52ffe679e4b466236a36cf795de3 Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.448073 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"2a66b364fe99e0d8692030d5621c03464a2d52ffe679e4b466236a36cf795de3"} Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.771893 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-g4ljp" Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.897909 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-swiftconf\") pod \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.897968 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-scripts\") pod \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.898038 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-combined-ca-bundle\") pod \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.898133 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-ring-data-devices\") pod \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.898174 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkd64\" (UniqueName: \"kubernetes.io/projected/8bb4cd6e-2d04-470f-b900-32a9a30a4137-kube-api-access-mkd64\") pod \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.898232 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8bb4cd6e-2d04-470f-b900-32a9a30a4137-etc-swift\") pod \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.898280 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-dispersionconf\") pod \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\" (UID: \"8bb4cd6e-2d04-470f-b900-32a9a30a4137\") " Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.898966 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8bb4cd6e-2d04-470f-b900-32a9a30a4137" (UID: "8bb4cd6e-2d04-470f-b900-32a9a30a4137"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.899172 4909 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.899201 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb4cd6e-2d04-470f-b900-32a9a30a4137-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8bb4cd6e-2d04-470f-b900-32a9a30a4137" (UID: "8bb4cd6e-2d04-470f-b900-32a9a30a4137"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.903314 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb4cd6e-2d04-470f-b900-32a9a30a4137-kube-api-access-mkd64" (OuterVolumeSpecName: "kube-api-access-mkd64") pod "8bb4cd6e-2d04-470f-b900-32a9a30a4137" (UID: "8bb4cd6e-2d04-470f-b900-32a9a30a4137"). InnerVolumeSpecName "kube-api-access-mkd64". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.922126 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-scripts" (OuterVolumeSpecName: "scripts") pod "8bb4cd6e-2d04-470f-b900-32a9a30a4137" (UID: "8bb4cd6e-2d04-470f-b900-32a9a30a4137"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.926873 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bb4cd6e-2d04-470f-b900-32a9a30a4137" (UID: "8bb4cd6e-2d04-470f-b900-32a9a30a4137"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.934040 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8bb4cd6e-2d04-470f-b900-32a9a30a4137" (UID: "8bb4cd6e-2d04-470f-b900-32a9a30a4137"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:03 crc kubenswrapper[4909]: I1126 07:18:03.934258 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8bb4cd6e-2d04-470f-b900-32a9a30a4137" (UID: "8bb4cd6e-2d04-470f-b900-32a9a30a4137"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.000750 4909 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8bb4cd6e-2d04-470f-b900-32a9a30a4137-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.000786 4909 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.000800 4909 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.000811 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb4cd6e-2d04-470f-b900-32a9a30a4137-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.000823 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb4cd6e-2d04-470f-b900-32a9a30a4137-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.000835 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkd64\" (UniqueName: \"kubernetes.io/projected/8bb4cd6e-2d04-470f-b900-32a9a30a4137-kube-api-access-mkd64\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.054821 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jlpgv"] Nov 26 07:18:04 crc kubenswrapper[4909]: E1126 07:18:04.055217 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7219c3e0-3c80-4c3a-b0c1-1918cb3980ac" containerName="mariadb-account-create" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.055239 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7219c3e0-3c80-4c3a-b0c1-1918cb3980ac" containerName="mariadb-account-create" Nov 26 07:18:04 crc kubenswrapper[4909]: E1126 07:18:04.055261 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb4cd6e-2d04-470f-b900-32a9a30a4137" containerName="swift-ring-rebalance" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.055269 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb4cd6e-2d04-470f-b900-32a9a30a4137" containerName="swift-ring-rebalance" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.055473 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7219c3e0-3c80-4c3a-b0c1-1918cb3980ac" containerName="mariadb-account-create" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.055505 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb4cd6e-2d04-470f-b900-32a9a30a4137" containerName="swift-ring-rebalance" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.056152 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.057919 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.058941 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mv4fs" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.063453 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jlpgv"] Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.101624 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zq4g\" (UniqueName: \"kubernetes.io/projected/b6421e06-7f96-420b-8aa1-04fa59e832e9-kube-api-access-8zq4g\") pod \"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.101719 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-db-sync-config-data\") pod \"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.101750 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-config-data\") pod \"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.101781 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-combined-ca-bundle\") pod \"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.203718 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-db-sync-config-data\") pod \"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.203774 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-config-data\") pod \"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.203819 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-combined-ca-bundle\") pod \"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.203867 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zq4g\" (UniqueName: \"kubernetes.io/projected/b6421e06-7f96-420b-8aa1-04fa59e832e9-kube-api-access-8zq4g\") pod 
\"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.209084 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-config-data\") pod \"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.211443 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-combined-ca-bundle\") pod \"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.212953 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-db-sync-config-data\") pod \"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.224115 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zq4g\" (UniqueName: \"kubernetes.io/projected/b6421e06-7f96-420b-8aa1-04fa59e832e9-kube-api-access-8zq4g\") pod \"glance-db-sync-jlpgv\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.375555 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.483738 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-g4ljp" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.483830 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-g4ljp" event={"ID":"8bb4cd6e-2d04-470f-b900-32a9a30a4137","Type":"ContainerDied","Data":"d456a3d039ea5f79872ab8c0128a5d1199e0a0594667b17d958372f9cf111a5a"} Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.483854 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d456a3d039ea5f79872ab8c0128a5d1199e0a0594667b17d958372f9cf111a5a" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.487376 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"dced4a3ee055a4cc6d79d52944605e70abd5ed1457b4c96ba7b9b9ae67562306"} Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.707307 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jlpgv"] Nov 26 07:18:04 crc kubenswrapper[4909]: W1126 07:18:04.715236 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6421e06_7f96_420b_8aa1_04fa59e832e9.slice/crio-aefd85f4a1de6e2badf3dc343b19ec675e5fd9854143603585be94976745e913 WatchSource:0}: Error finding container aefd85f4a1de6e2badf3dc343b19ec675e5fd9854143603585be94976745e913: Status 404 returned error can't find the container with id aefd85f4a1de6e2badf3dc343b19ec675e5fd9854143603585be94976745e913 Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.825004 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sxvh6" podUID="a74aad93-58f0-4023-95e3-3f0e92558f84" containerName="ovn-controller" probeResult="failure" output=< Nov 26 07:18:04 crc kubenswrapper[4909]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 26 07:18:04 crc kubenswrapper[4909]: > Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.837446 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:18:04 crc kubenswrapper[4909]: I1126 07:18:04.840580 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5f8k9" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.078709 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sxvh6-config-d6jww"] Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.082423 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.084272 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.098401 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sxvh6-config-d6jww"] Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.116996 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run-ovn\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.117055 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-additional-scripts\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.117077 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.117098 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-log-ovn\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.117149 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fm42\" (UniqueName: \"kubernetes.io/projected/454959f7-c179-469a-b418-7c025a08139c-kube-api-access-2fm42\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.117191 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-scripts\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.219147 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-additional-scripts\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.219203 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.219270 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-log-ovn\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.219343 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fm42\" (UniqueName: \"kubernetes.io/projected/454959f7-c179-469a-b418-7c025a08139c-kube-api-access-2fm42\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.219403 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-scripts\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.219471 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run-ovn\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.219859 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run-ovn\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.220694 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-additional-scripts\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.220769 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.220817 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-log-ovn\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.223447 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-scripts\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.240680 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fm42\" (UniqueName: \"kubernetes.io/projected/454959f7-c179-469a-b418-7c025a08139c-kube-api-access-2fm42\") pod \"ovn-controller-sxvh6-config-d6jww\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.400073 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.505490 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jlpgv" event={"ID":"b6421e06-7f96-420b-8aa1-04fa59e832e9","Type":"ContainerStarted","Data":"aefd85f4a1de6e2badf3dc343b19ec675e5fd9854143603585be94976745e913"} Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.512186 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"a449d7cd0e0553480c704885c8e18a406ff461623be069faf59ed385c2a89148"} Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.512239 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"ef85ba50ad3703e23f7fcb4391c0f594c7dc9bc10c9b5ed2ff4ec5998223f89c"} Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.512253 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"d349b9ce563e6e2048f46f3884eca2d8e3ba6436ecab095b55cfbdff47ed90e8"} Nov 26 07:18:05 crc kubenswrapper[4909]: I1126 07:18:05.840984 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sxvh6-config-d6jww"] Nov 26 07:18:05 crc kubenswrapper[4909]: W1126 07:18:05.850448 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod454959f7_c179_469a_b418_7c025a08139c.slice/crio-42cfeb73de93aeecc37ebf3ae21069087af7dd521513be466ee8064b96c4e4f3 WatchSource:0}: Error finding container 42cfeb73de93aeecc37ebf3ae21069087af7dd521513be466ee8064b96c4e4f3: Status 404 returned error can't find the container with id 42cfeb73de93aeecc37ebf3ae21069087af7dd521513be466ee8064b96c4e4f3 Nov 26 07:18:06 crc kubenswrapper[4909]: I1126 07:18:06.522795 4909 generic.go:334] "Generic (PLEG): container finished" podID="454959f7-c179-469a-b418-7c025a08139c" containerID="abe8173aaa3344ab6d25c2b1142d4624d7cc8df8e25e8e9e5721a5c2abddfc18" exitCode=0 Nov 26 07:18:06 crc kubenswrapper[4909]: I1126 07:18:06.522886 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sxvh6-config-d6jww" event={"ID":"454959f7-c179-469a-b418-7c025a08139c","Type":"ContainerDied","Data":"abe8173aaa3344ab6d25c2b1142d4624d7cc8df8e25e8e9e5721a5c2abddfc18"} Nov 26 07:18:06 crc kubenswrapper[4909]: I1126 07:18:06.523075 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sxvh6-config-d6jww" 
event={"ID":"454959f7-c179-469a-b418-7c025a08139c","Type":"ContainerStarted","Data":"42cfeb73de93aeecc37ebf3ae21069087af7dd521513be466ee8064b96c4e4f3"} Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.534650 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"5d4ff632621d60ecaadd162fdb8816be897785eaad8d97513f60206f89fa1487"} Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.535130 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"c76f25e43175f3d693010c16bd1b421da9f361eea4704ff1766122084490d5d8"} Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.859821 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.965618 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-log-ovn\") pod \"454959f7-c179-469a-b418-7c025a08139c\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.965720 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fm42\" (UniqueName: \"kubernetes.io/projected/454959f7-c179-469a-b418-7c025a08139c-kube-api-access-2fm42\") pod \"454959f7-c179-469a-b418-7c025a08139c\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.965783 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "454959f7-c179-469a-b418-7c025a08139c" (UID: "454959f7-c179-469a-b418-7c025a08139c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.965799 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-additional-scripts\") pod \"454959f7-c179-469a-b418-7c025a08139c\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.965876 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run\") pod \"454959f7-c179-469a-b418-7c025a08139c\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.966042 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run" (OuterVolumeSpecName: "var-run") pod "454959f7-c179-469a-b418-7c025a08139c" (UID: "454959f7-c179-469a-b418-7c025a08139c"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.966050 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-scripts\") pod \"454959f7-c179-469a-b418-7c025a08139c\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.966122 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run-ovn\") pod \"454959f7-c179-469a-b418-7c025a08139c\" (UID: \"454959f7-c179-469a-b418-7c025a08139c\") " Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.966300 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "454959f7-c179-469a-b418-7c025a08139c" (UID: "454959f7-c179-469a-b418-7c025a08139c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.966562 4909 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.966589 4909 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.966619 4909 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/454959f7-c179-469a-b418-7c025a08139c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.966637 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "454959f7-c179-469a-b418-7c025a08139c" (UID: "454959f7-c179-469a-b418-7c025a08139c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.967172 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-scripts" (OuterVolumeSpecName: "scripts") pod "454959f7-c179-469a-b418-7c025a08139c" (UID: "454959f7-c179-469a-b418-7c025a08139c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:07 crc kubenswrapper[4909]: I1126 07:18:07.983728 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454959f7-c179-469a-b418-7c025a08139c-kube-api-access-2fm42" (OuterVolumeSpecName: "kube-api-access-2fm42") pod "454959f7-c179-469a-b418-7c025a08139c" (UID: "454959f7-c179-469a-b418-7c025a08139c"). InnerVolumeSpecName "kube-api-access-2fm42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:08 crc kubenswrapper[4909]: I1126 07:18:08.068491 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fm42\" (UniqueName: \"kubernetes.io/projected/454959f7-c179-469a-b418-7c025a08139c-kube-api-access-2fm42\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:08 crc kubenswrapper[4909]: I1126 07:18:08.068542 4909 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:08 crc kubenswrapper[4909]: I1126 07:18:08.068557 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/454959f7-c179-469a-b418-7c025a08139c-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:08 crc kubenswrapper[4909]: I1126 07:18:08.553249 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"d57d935982096fce0c90d166aa9755252570903363ed795caa7ea306a1c4a125"} Nov 26 07:18:08 crc kubenswrapper[4909]: I1126 07:18:08.555145 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sxvh6-config-d6jww" event={"ID":"454959f7-c179-469a-b418-7c025a08139c","Type":"ContainerDied","Data":"42cfeb73de93aeecc37ebf3ae21069087af7dd521513be466ee8064b96c4e4f3"} Nov 26 07:18:08 crc kubenswrapper[4909]: I1126 07:18:08.555171 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42cfeb73de93aeecc37ebf3ae21069087af7dd521513be466ee8064b96c4e4f3" Nov 26 07:18:08 crc kubenswrapper[4909]: I1126 07:18:08.555179 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sxvh6-config-d6jww" Nov 26 07:18:08 crc kubenswrapper[4909]: I1126 07:18:08.958760 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sxvh6-config-d6jww"] Nov 26 07:18:08 crc kubenswrapper[4909]: I1126 07:18:08.968485 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sxvh6-config-d6jww"] Nov 26 07:18:09 crc kubenswrapper[4909]: I1126 07:18:09.570772 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"12254f31c6a379da5fd4e34c45fd68057888fa099c912fe12dd9c1a881206bdf"} Nov 26 07:18:09 crc kubenswrapper[4909]: I1126 07:18:09.826043 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sxvh6" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.021760 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.298218 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-v68wp"] Nov 26 07:18:10 crc kubenswrapper[4909]: E1126 07:18:10.298919 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454959f7-c179-469a-b418-7c025a08139c" containerName="ovn-config" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.298941 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="454959f7-c179-469a-b418-7c025a08139c" containerName="ovn-config" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.299156 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="454959f7-c179-469a-b418-7c025a08139c" containerName="ovn-config" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.299786 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v68wp" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.307059 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v68wp"] Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.314355 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.393540 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zg6rx"] Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.394543 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zg6rx" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.404966 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zg6rx"] Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.406718 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzq5\" (UniqueName: \"kubernetes.io/projected/031a6940-0a2c-4be2-9601-061ebeac0989-kube-api-access-wjzq5\") pod \"barbican-db-create-v68wp\" (UID: \"031a6940-0a2c-4be2-9601-061ebeac0989\") " pod="openstack/barbican-db-create-v68wp" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.508499 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzq5\" (UniqueName: \"kubernetes.io/projected/031a6940-0a2c-4be2-9601-061ebeac0989-kube-api-access-wjzq5\") pod \"barbican-db-create-v68wp\" (UID: \"031a6940-0a2c-4be2-9601-061ebeac0989\") " pod="openstack/barbican-db-create-v68wp" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.508584 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6m6q\" (UniqueName: \"kubernetes.io/projected/126f2c5e-9f3f-444c-854c-b72d3c16c695-kube-api-access-l6m6q\") pod \"cinder-db-create-zg6rx\" (UID: \"126f2c5e-9f3f-444c-854c-b72d3c16c695\") " pod="openstack/cinder-db-create-zg6rx" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.508832 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454959f7-c179-469a-b418-7c025a08139c" path="/var/lib/kubelet/pods/454959f7-c179-469a-b418-7c025a08139c/volumes" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.509424 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-nf9f5"] Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.510367 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nf9f5" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.510943 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nf9f5"] Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.542688 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjzq5\" (UniqueName: \"kubernetes.io/projected/031a6940-0a2c-4be2-9601-061ebeac0989-kube-api-access-wjzq5\") pod \"barbican-db-create-v68wp\" (UID: \"031a6940-0a2c-4be2-9601-061ebeac0989\") " pod="openstack/barbican-db-create-v68wp" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.610202 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6m6q\" (UniqueName: \"kubernetes.io/projected/126f2c5e-9f3f-444c-854c-b72d3c16c695-kube-api-access-l6m6q\") pod \"cinder-db-create-zg6rx\" (UID: \"126f2c5e-9f3f-444c-854c-b72d3c16c695\") " pod="openstack/cinder-db-create-zg6rx" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.610290 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5rr\" (UniqueName: \"kubernetes.io/projected/75f5c169-0392-4dbe-91a4-856e444ce6a9-kube-api-access-cx5rr\") pod \"neutron-db-create-nf9f5\" (UID: \"75f5c169-0392-4dbe-91a4-856e444ce6a9\") " pod="openstack/neutron-db-create-nf9f5" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.618361 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-v68wp" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.627971 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6m6q\" (UniqueName: \"kubernetes.io/projected/126f2c5e-9f3f-444c-854c-b72d3c16c695-kube-api-access-l6m6q\") pod \"cinder-db-create-zg6rx\" (UID: \"126f2c5e-9f3f-444c-854c-b72d3c16c695\") " pod="openstack/cinder-db-create-zg6rx" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.648463 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-k9bk8"] Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.650323 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.652445 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgdcb" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.652889 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.653068 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.654443 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.675667 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-k9bk8"] Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.712152 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5rr\" (UniqueName: \"kubernetes.io/projected/75f5c169-0392-4dbe-91a4-856e444ce6a9-kube-api-access-cx5rr\") pod \"neutron-db-create-nf9f5\" (UID: \"75f5c169-0392-4dbe-91a4-856e444ce6a9\") " pod="openstack/neutron-db-create-nf9f5" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.732270 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5rr\" (UniqueName: \"kubernetes.io/projected/75f5c169-0392-4dbe-91a4-856e444ce6a9-kube-api-access-cx5rr\") pod \"neutron-db-create-nf9f5\" (UID: \"75f5c169-0392-4dbe-91a4-856e444ce6a9\") " pod="openstack/neutron-db-create-nf9f5" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.747337 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zg6rx" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.813615 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbhdv\" (UniqueName: \"kubernetes.io/projected/b2de6571-6dd9-40bc-ad9a-59015c568279-kube-api-access-dbhdv\") pod \"keystone-db-sync-k9bk8\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.813692 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-config-data\") pod \"keystone-db-sync-k9bk8\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.813785 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-combined-ca-bundle\") pod \"keystone-db-sync-k9bk8\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.830419 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nf9f5" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.915430 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-combined-ca-bundle\") pod \"keystone-db-sync-k9bk8\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.915499 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbhdv\" (UniqueName: \"kubernetes.io/projected/b2de6571-6dd9-40bc-ad9a-59015c568279-kube-api-access-dbhdv\") pod \"keystone-db-sync-k9bk8\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.915545 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-config-data\") pod \"keystone-db-sync-k9bk8\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.926712 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-combined-ca-bundle\") pod \"keystone-db-sync-k9bk8\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.927921 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-config-data\") pod \"keystone-db-sync-k9bk8\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.940091 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbhdv\" (UniqueName: 
\"kubernetes.io/projected/b2de6571-6dd9-40bc-ad9a-59015c568279-kube-api-access-dbhdv\") pod \"keystone-db-sync-k9bk8\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:10 crc kubenswrapper[4909]: I1126 07:18:10.977305 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:21 crc kubenswrapper[4909]: E1126 07:18:21.411703 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 26 07:18:21 crc kubenswrapper[4909]: E1126 07:18:21.412765 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zq4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-jlpgv_openstack(b6421e06-7f96-420b-8aa1-04fa59e832e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 07:18:21 crc kubenswrapper[4909]: E1126 07:18:21.414940 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-jlpgv" podUID="b6421e06-7f96-420b-8aa1-04fa59e832e9" Nov 26 07:18:21 crc kubenswrapper[4909]: I1126 07:18:21.676449 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"ae51b3e0f8704221eb8fa99538d9b20411e525c3d485412522af25ca33ee293d"} Nov 26 07:18:21 crc kubenswrapper[4909]: E1126 07:18:21.677475 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-jlpgv" podUID="b6421e06-7f96-420b-8aa1-04fa59e832e9" Nov 26 07:18:21 crc kubenswrapper[4909]: I1126 07:18:21.926661 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nf9f5"] Nov 26 07:18:21 crc kubenswrapper[4909]: W1126 07:18:21.929125 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75f5c169_0392_4dbe_91a4_856e444ce6a9.slice/crio-1cc6ba64c8c4f5d78ea39d14e8e959b05dbf10a76e483cfee0b62ae3a5abce77 WatchSource:0}: Error finding container 1cc6ba64c8c4f5d78ea39d14e8e959b05dbf10a76e483cfee0b62ae3a5abce77: Status 404 returned error can't find the container with id 1cc6ba64c8c4f5d78ea39d14e8e959b05dbf10a76e483cfee0b62ae3a5abce77 Nov 26 07:18:21 crc kubenswrapper[4909]: W1126 07:18:21.931801 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod126f2c5e_9f3f_444c_854c_b72d3c16c695.slice/crio-162dfccc563e7eb8d23c4223ee3f0137ef1f87588dcabeb618f801513e147a70 WatchSource:0}: Error finding container 162dfccc563e7eb8d23c4223ee3f0137ef1f87588dcabeb618f801513e147a70: Status 404 returned error can't find the container with id 162dfccc563e7eb8d23c4223ee3f0137ef1f87588dcabeb618f801513e147a70 Nov 26 07:18:21 crc kubenswrapper[4909]: I1126 07:18:21.933574 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v68wp"] Nov 26 07:18:21 crc kubenswrapper[4909]: W1126 07:18:21.935794 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod031a6940_0a2c_4be2_9601_061ebeac0989.slice/crio-439d67566e29d6f3e9f14faf3012090b15876d6d813cca154407073f09f68b09 WatchSource:0}: Error finding container 439d67566e29d6f3e9f14faf3012090b15876d6d813cca154407073f09f68b09: Status 404 returned error can't find the container with id 439d67566e29d6f3e9f14faf3012090b15876d6d813cca154407073f09f68b09 Nov 26 07:18:21 crc kubenswrapper[4909]: I1126 07:18:21.942509 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zg6rx"] Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.041459 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-k9bk8"] Nov 26 07:18:22 crc kubenswrapper[4909]: W1126 07:18:22.062796 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2de6571_6dd9_40bc_ad9a_59015c568279.slice/crio-7c48936ed959d6e8b359f53432b1b5a5a65b5e8eb49738d83739010d43472829 WatchSource:0}: Error finding container 7c48936ed959d6e8b359f53432b1b5a5a65b5e8eb49738d83739010d43472829: Status 404 returned error can't find the container with id 7c48936ed959d6e8b359f53432b1b5a5a65b5e8eb49738d83739010d43472829 Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.693456 4909 generic.go:334] "Generic (PLEG): container finished" podID="126f2c5e-9f3f-444c-854c-b72d3c16c695" 
containerID="af878b4cd5af5890eb29ddd41d3c62358d147f435921a892c7cd87cef16edc9d" exitCode=0 Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.693526 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zg6rx" event={"ID":"126f2c5e-9f3f-444c-854c-b72d3c16c695","Type":"ContainerDied","Data":"af878b4cd5af5890eb29ddd41d3c62358d147f435921a892c7cd87cef16edc9d"} Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.693893 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zg6rx" event={"ID":"126f2c5e-9f3f-444c-854c-b72d3c16c695","Type":"ContainerStarted","Data":"162dfccc563e7eb8d23c4223ee3f0137ef1f87588dcabeb618f801513e147a70"} Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.701356 4909 generic.go:334] "Generic (PLEG): container finished" podID="031a6940-0a2c-4be2-9601-061ebeac0989" containerID="259ac1f9f264c143c227a93f94d048fd7b340bc2d8592bce6cce59d08b832a6e" exitCode=0 Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.701456 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v68wp" event={"ID":"031a6940-0a2c-4be2-9601-061ebeac0989","Type":"ContainerDied","Data":"259ac1f9f264c143c227a93f94d048fd7b340bc2d8592bce6cce59d08b832a6e"} Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.701484 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v68wp" event={"ID":"031a6940-0a2c-4be2-9601-061ebeac0989","Type":"ContainerStarted","Data":"439d67566e29d6f3e9f14faf3012090b15876d6d813cca154407073f09f68b09"} Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.715289 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"96997ae8444f96d36126a818d42e9ce0882a0ec678fa1686cadf36da925626d7"} Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.715338 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"07c32dca92ef9af6a5b2f1da9964db33a8d49c3a4d846c0cb66461ab457f596f"} Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.715352 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"deb5869801f78aa72238df2b9719a9337500c7d4fe3cef9fd57bfea3f27a9500"} Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.715363 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"93d0e136e4522423ec6013c050a8ff1959c79f2b6857b7223d3792246312b6bd"} Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.715376 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"e0d087da0faef2436ea0b5dc36389de6f9bcae11c0745372234e7e2e2515dc1e"} Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.718825 4909 generic.go:334] "Generic (PLEG): container finished" podID="75f5c169-0392-4dbe-91a4-856e444ce6a9" containerID="7b1150b9f4dbb87e6854cf42f0e7aaa92f775964a5ecab7b7673a2766cabc798" exitCode=0 Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.718916 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nf9f5" 
event={"ID":"75f5c169-0392-4dbe-91a4-856e444ce6a9","Type":"ContainerDied","Data":"7b1150b9f4dbb87e6854cf42f0e7aaa92f775964a5ecab7b7673a2766cabc798"} Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.719027 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nf9f5" event={"ID":"75f5c169-0392-4dbe-91a4-856e444ce6a9","Type":"ContainerStarted","Data":"1cc6ba64c8c4f5d78ea39d14e8e959b05dbf10a76e483cfee0b62ae3a5abce77"} Nov 26 07:18:22 crc kubenswrapper[4909]: I1126 07:18:22.724896 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k9bk8" event={"ID":"b2de6571-6dd9-40bc-ad9a-59015c568279","Type":"ContainerStarted","Data":"7c48936ed959d6e8b359f53432b1b5a5a65b5e8eb49738d83739010d43472829"} Nov 26 07:18:23 crc kubenswrapper[4909]: I1126 07:18:23.738720 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerStarted","Data":"1b1f9c5a8d3224d9a8311e314bf6dc4a0fbdc6f393e2987d8955a46b68d1bada"} Nov 26 07:18:23 crc kubenswrapper[4909]: I1126 07:18:23.783470 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.500223694 podStartE2EDuration="38.783448522s" podCreationTimestamp="2025-11-26 07:17:45 +0000 UTC" firstStartedPulling="2025-11-26 07:18:03.163656779 +0000 UTC m=+1055.309867945" lastFinishedPulling="2025-11-26 07:18:21.446881597 +0000 UTC m=+1073.593092773" observedRunningTime="2025-11-26 07:18:23.772655977 +0000 UTC m=+1075.918867163" watchObservedRunningTime="2025-11-26 07:18:23.783448522 +0000 UTC m=+1075.929659688" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.139625 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-m7d64"] Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.141709 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.147549 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.167697 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-m7d64"] Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.250364 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shnxd\" (UniqueName: \"kubernetes.io/projected/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-kube-api-access-shnxd\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.250574 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-config\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.250641 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.250681 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.250768 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.250818 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.351909 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-config\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.351948 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: 
\"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.351970 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.351997 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.352017 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.352073 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shnxd\" (UniqueName: \"kubernetes.io/projected/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-kube-api-access-shnxd\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.353019 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.353048 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.353049 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.353672 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.353881 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-config\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 
07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.370544 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shnxd\" (UniqueName: \"kubernetes.io/projected/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-kube-api-access-shnxd\") pod \"dnsmasq-dns-77585f5f8c-m7d64\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:24 crc kubenswrapper[4909]: I1126 07:18:24.472723 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.237920 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v68wp" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.278690 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nf9f5" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.289461 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjzq5\" (UniqueName: \"kubernetes.io/projected/031a6940-0a2c-4be2-9601-061ebeac0989-kube-api-access-wjzq5\") pod \"031a6940-0a2c-4be2-9601-061ebeac0989\" (UID: \"031a6940-0a2c-4be2-9601-061ebeac0989\") " Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.295266 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/031a6940-0a2c-4be2-9601-061ebeac0989-kube-api-access-wjzq5" (OuterVolumeSpecName: "kube-api-access-wjzq5") pod "031a6940-0a2c-4be2-9601-061ebeac0989" (UID: "031a6940-0a2c-4be2-9601-061ebeac0989"). InnerVolumeSpecName "kube-api-access-wjzq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.341276 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zg6rx" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.391606 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx5rr\" (UniqueName: \"kubernetes.io/projected/75f5c169-0392-4dbe-91a4-856e444ce6a9-kube-api-access-cx5rr\") pod \"75f5c169-0392-4dbe-91a4-856e444ce6a9\" (UID: \"75f5c169-0392-4dbe-91a4-856e444ce6a9\") " Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.391694 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6m6q\" (UniqueName: \"kubernetes.io/projected/126f2c5e-9f3f-444c-854c-b72d3c16c695-kube-api-access-l6m6q\") pod \"126f2c5e-9f3f-444c-854c-b72d3c16c695\" (UID: \"126f2c5e-9f3f-444c-854c-b72d3c16c695\") " Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.392278 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjzq5\" (UniqueName: \"kubernetes.io/projected/031a6940-0a2c-4be2-9601-061ebeac0989-kube-api-access-wjzq5\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.395608 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f5c169-0392-4dbe-91a4-856e444ce6a9-kube-api-access-cx5rr" (OuterVolumeSpecName: "kube-api-access-cx5rr") pod "75f5c169-0392-4dbe-91a4-856e444ce6a9" (UID: "75f5c169-0392-4dbe-91a4-856e444ce6a9"). InnerVolumeSpecName "kube-api-access-cx5rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.398493 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/126f2c5e-9f3f-444c-854c-b72d3c16c695-kube-api-access-l6m6q" (OuterVolumeSpecName: "kube-api-access-l6m6q") pod "126f2c5e-9f3f-444c-854c-b72d3c16c695" (UID: "126f2c5e-9f3f-444c-854c-b72d3c16c695"). InnerVolumeSpecName "kube-api-access-l6m6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.493294 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6m6q\" (UniqueName: \"kubernetes.io/projected/126f2c5e-9f3f-444c-854c-b72d3c16c695-kube-api-access-l6m6q\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.493332 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx5rr\" (UniqueName: \"kubernetes.io/projected/75f5c169-0392-4dbe-91a4-856e444ce6a9-kube-api-access-cx5rr\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:26 crc kubenswrapper[4909]: W1126 07:18:26.601350 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ed410ba_27c9_4ec5_9a1f_728fbb095cdd.slice/crio-5049ec1b0204deb9ee603cfb97a5611d014c33df3ab29db0c9146769a281901d WatchSource:0}: Error finding container 5049ec1b0204deb9ee603cfb97a5611d014c33df3ab29db0c9146769a281901d: Status 404 returned error can't find the container with id 5049ec1b0204deb9ee603cfb97a5611d014c33df3ab29db0c9146769a281901d Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.602392 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-m7d64"] Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.784542 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" event={"ID":"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd","Type":"ContainerStarted","Data":"5049ec1b0204deb9ee603cfb97a5611d014c33df3ab29db0c9146769a281901d"} Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.786910 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nf9f5" event={"ID":"75f5c169-0392-4dbe-91a4-856e444ce6a9","Type":"ContainerDied","Data":"1cc6ba64c8c4f5d78ea39d14e8e959b05dbf10a76e483cfee0b62ae3a5abce77"} Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.786975 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cc6ba64c8c4f5d78ea39d14e8e959b05dbf10a76e483cfee0b62ae3a5abce77" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.786929 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nf9f5" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.789055 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k9bk8" event={"ID":"b2de6571-6dd9-40bc-ad9a-59015c568279","Type":"ContainerStarted","Data":"a086419c64150860bdc1ce9fa4c0c19c2999a5f8f64e93307a4393715ab39abc"} Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.794945 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zg6rx" event={"ID":"126f2c5e-9f3f-444c-854c-b72d3c16c695","Type":"ContainerDied","Data":"162dfccc563e7eb8d23c4223ee3f0137ef1f87588dcabeb618f801513e147a70"} Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.794973 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="162dfccc563e7eb8d23c4223ee3f0137ef1f87588dcabeb618f801513e147a70" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.795032 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zg6rx" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.799697 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v68wp" event={"ID":"031a6940-0a2c-4be2-9601-061ebeac0989","Type":"ContainerDied","Data":"439d67566e29d6f3e9f14faf3012090b15876d6d813cca154407073f09f68b09"} Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.799748 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="439d67566e29d6f3e9f14faf3012090b15876d6d813cca154407073f09f68b09" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.799864 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v68wp" Nov 26 07:18:26 crc kubenswrapper[4909]: I1126 07:18:26.814549 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-k9bk8" podStartSLOduration=12.746289587 podStartE2EDuration="16.814528671s" podCreationTimestamp="2025-11-26 07:18:10 +0000 UTC" firstStartedPulling="2025-11-26 07:18:22.071133312 +0000 UTC m=+1074.217344478" lastFinishedPulling="2025-11-26 07:18:26.139372356 +0000 UTC m=+1078.285583562" observedRunningTime="2025-11-26 07:18:26.809556905 +0000 UTC m=+1078.955768081" watchObservedRunningTime="2025-11-26 07:18:26.814528671 +0000 UTC m=+1078.960739847" Nov 26 07:18:27 crc kubenswrapper[4909]: I1126 07:18:27.809812 4909 generic.go:334] "Generic (PLEG): container finished" podID="1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" containerID="3c2aed84ce38c8863dcfed3810ba9d72d56a7fe2b53098010fd8deaa8efe64c5" exitCode=0 Nov 26 07:18:27 crc kubenswrapper[4909]: I1126 07:18:27.809881 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" event={"ID":"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd","Type":"ContainerDied","Data":"3c2aed84ce38c8863dcfed3810ba9d72d56a7fe2b53098010fd8deaa8efe64c5"} Nov 26 07:18:28 crc kubenswrapper[4909]: I1126 07:18:28.820514 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" event={"ID":"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd","Type":"ContainerStarted","Data":"3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b"} Nov 26 07:18:28 crc kubenswrapper[4909]: I1126 07:18:28.820783 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:28 crc kubenswrapper[4909]: I1126 07:18:28.840863 4909 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" podStartSLOduration=4.840837809 podStartE2EDuration="4.840837809s" podCreationTimestamp="2025-11-26 07:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:18:28.834089695 +0000 UTC m=+1080.980300881" watchObservedRunningTime="2025-11-26 07:18:28.840837809 +0000 UTC m=+1080.987048995" Nov 26 07:18:30 crc kubenswrapper[4909]: I1126 07:18:30.847746 4909 generic.go:334] "Generic (PLEG): container finished" podID="b2de6571-6dd9-40bc-ad9a-59015c568279" containerID="a086419c64150860bdc1ce9fa4c0c19c2999a5f8f64e93307a4393715ab39abc" exitCode=0 Nov 26 07:18:30 crc kubenswrapper[4909]: I1126 07:18:30.847803 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k9bk8" event={"ID":"b2de6571-6dd9-40bc-ad9a-59015c568279","Type":"ContainerDied","Data":"a086419c64150860bdc1ce9fa4c0c19c2999a5f8f64e93307a4393715ab39abc"} Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.230878 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.396452 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbhdv\" (UniqueName: \"kubernetes.io/projected/b2de6571-6dd9-40bc-ad9a-59015c568279-kube-api-access-dbhdv\") pod \"b2de6571-6dd9-40bc-ad9a-59015c568279\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.397118 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-combined-ca-bundle\") pod \"b2de6571-6dd9-40bc-ad9a-59015c568279\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.397160 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-config-data\") pod \"b2de6571-6dd9-40bc-ad9a-59015c568279\" (UID: \"b2de6571-6dd9-40bc-ad9a-59015c568279\") " Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.405747 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2de6571-6dd9-40bc-ad9a-59015c568279-kube-api-access-dbhdv" (OuterVolumeSpecName: "kube-api-access-dbhdv") pod "b2de6571-6dd9-40bc-ad9a-59015c568279" (UID: "b2de6571-6dd9-40bc-ad9a-59015c568279"). InnerVolumeSpecName "kube-api-access-dbhdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.436389 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2de6571-6dd9-40bc-ad9a-59015c568279" (UID: "b2de6571-6dd9-40bc-ad9a-59015c568279"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.480537 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-config-data" (OuterVolumeSpecName: "config-data") pod "b2de6571-6dd9-40bc-ad9a-59015c568279" (UID: "b2de6571-6dd9-40bc-ad9a-59015c568279"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.498717 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.499417 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de6571-6dd9-40bc-ad9a-59015c568279-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.499641 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbhdv\" (UniqueName: \"kubernetes.io/projected/b2de6571-6dd9-40bc-ad9a-59015c568279-kube-api-access-dbhdv\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.866761 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k9bk8" event={"ID":"b2de6571-6dd9-40bc-ad9a-59015c568279","Type":"ContainerDied","Data":"7c48936ed959d6e8b359f53432b1b5a5a65b5e8eb49738d83739010d43472829"} Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.866797 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c48936ed959d6e8b359f53432b1b5a5a65b5e8eb49738d83739010d43472829" Nov 26 07:18:32 crc kubenswrapper[4909]: I1126 07:18:32.866798 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-k9bk8" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.135883 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-m7d64"] Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.136129 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" podUID="1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" containerName="dnsmasq-dns" containerID="cri-o://3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b" gracePeriod=10 Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.139847 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.153349 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kkgjx"] Nov 26 07:18:33 crc kubenswrapper[4909]: E1126 07:18:33.153742 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="126f2c5e-9f3f-444c-854c-b72d3c16c695" containerName="mariadb-database-create" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.153762 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="126f2c5e-9f3f-444c-854c-b72d3c16c695" containerName="mariadb-database-create" Nov 26 07:18:33 crc kubenswrapper[4909]: E1126 07:18:33.153781 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031a6940-0a2c-4be2-9601-061ebeac0989" containerName="mariadb-database-create" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.153790 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="031a6940-0a2c-4be2-9601-061ebeac0989" containerName="mariadb-database-create" Nov 26 07:18:33 crc kubenswrapper[4909]: E1126 07:18:33.153824 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2de6571-6dd9-40bc-ad9a-59015c568279" containerName="keystone-db-sync" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.153832 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2de6571-6dd9-40bc-ad9a-59015c568279" containerName="keystone-db-sync" Nov 26 07:18:33 crc kubenswrapper[4909]: E1126 07:18:33.153848 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f5c169-0392-4dbe-91a4-856e444ce6a9" containerName="mariadb-database-create" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.153855 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f5c169-0392-4dbe-91a4-856e444ce6a9" containerName="mariadb-database-create" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.154043 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="031a6940-0a2c-4be2-9601-061ebeac0989" containerName="mariadb-database-create" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.154067 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f5c169-0392-4dbe-91a4-856e444ce6a9" containerName="mariadb-database-create" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.154083 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="126f2c5e-9f3f-444c-854c-b72d3c16c695" containerName="mariadb-database-create" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.154094 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2de6571-6dd9-40bc-ad9a-59015c568279" containerName="keystone-db-sync" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.154754 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.159391 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.159658 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.160092 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgdcb" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.160233 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.210040 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zlf4\" (UniqueName: \"kubernetes.io/projected/54d7e92e-1775-4a5e-b00f-672e3ad05283-kube-api-access-4zlf4\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.210137 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-combined-ca-bundle\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.210187 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-credential-keys\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.210247 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-scripts\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.210271 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-fernet-keys\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.210295 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-config-data\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.223372 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kkgjx"] Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.238691 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-fvv4p"] Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.240441 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.251012 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-fvv4p"] Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314152 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-combined-ca-bundle\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314206 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-credential-keys\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314288 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-svc\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314316 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-scripts\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314338 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-fernet-keys\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314363 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-config-data\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314394 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85lz\" (UniqueName: \"kubernetes.io/projected/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-kube-api-access-g85lz\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314427 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-config\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314453 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314508 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314535 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zlf4\" (UniqueName: \"kubernetes.io/projected/54d7e92e-1775-4a5e-b00f-672e3ad05283-kube-api-access-4zlf4\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.314578 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.320803 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-credential-keys\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.327043 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-scripts\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.327391 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-combined-ca-bundle\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.340122 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.341139 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-config-data\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.353131 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-fernet-keys\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.369977 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4zlf4\" (UniqueName: \"kubernetes.io/projected/54d7e92e-1775-4a5e-b00f-672e3ad05283-kube-api-access-4zlf4\") pod \"keystone-bootstrap-kkgjx\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.382414 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.382546 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.385255 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.385904 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.424999 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-svc\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.429481 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g85lz\" (UniqueName: \"kubernetes.io/projected/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-kube-api-access-g85lz\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.429564 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-config\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.429620 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.429700 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.429769 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.430788 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: 
\"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.434930 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-config\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.436536 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.442096 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.443363 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-fvv4p"] Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.444291 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-svc\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: E1126 07:18:33.444571 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[dns-svc kube-api-access-g85lz], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" podUID="dd6b629b-63cd-4bd8-91c2-f894a123d2c4" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.472491 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g85lz\" (UniqueName: \"kubernetes.io/projected/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-kube-api-access-g85lz\") pod \"dnsmasq-dns-55fff446b9-fvv4p\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.474169 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-dk2k6"] Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.475531 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.480245 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.481038 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b9rj4" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.486999 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.498118 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dk2k6"] Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.513398 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-zsqsd"] Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.521551 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-zsqsd"] Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.521669 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.531754 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-config-data\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.531967 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-scripts\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.532039 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-log-httpd\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.532209 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.532674 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-run-httpd\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.532714 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.532792 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx88q\" (UniqueName: \"kubernetes.io/projected/e14d933d-2d7e-43cf-a99d-d03035d13522-kube-api-access-qx88q\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.595576 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.633774 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acec782e-7dc5-449d-a3bc-15e6100aa7c6-logs\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.633851 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-run-httpd\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.633963 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrspf\" (UniqueName: \"kubernetes.io/projected/acec782e-7dc5-449d-a3bc-15e6100aa7c6-kube-api-access-zrspf\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.633994 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634021 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634044 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-combined-ca-bundle\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634077 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx88q\" (UniqueName: \"kubernetes.io/projected/e14d933d-2d7e-43cf-a99d-d03035d13522-kube-api-access-qx88q\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634099 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-config-data\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 
07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634130 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsgjz\" (UniqueName: \"kubernetes.io/projected/fde2ec01-3a8c-4264-b307-7d7ac3682499-kube-api-access-fsgjz\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634153 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634176 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-config\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634199 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-scripts\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634221 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-config-data\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634251 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-scripts\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634273 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-log-httpd\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634293 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634319 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.634347 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.635232 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-run-httpd\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.636842 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-log-httpd\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.639138 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-config-data\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.639586 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.640399 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.641971 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-scripts\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.661324 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx88q\" (UniqueName: \"kubernetes.io/projected/e14d933d-2d7e-43cf-a99d-d03035d13522-kube-api-access-qx88q\") pod \"ceilometer-0\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.720986 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.723955 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.734722 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shnxd\" (UniqueName: \"kubernetes.io/projected/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-kube-api-access-shnxd\") pod \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.734760 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-svc\") pod \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.734820 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-config\") pod \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.734870 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-swift-storage-0\") pod \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.734890 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-sb\") pod \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.734910 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-nb\") pod \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\" (UID: \"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd\") " Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.734992 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.735039 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acec782e-7dc5-449d-a3bc-15e6100aa7c6-logs\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.735073 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrspf\" (UniqueName: \"kubernetes.io/projected/acec782e-7dc5-449d-a3bc-15e6100aa7c6-kube-api-access-zrspf\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.735095 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.735111 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-combined-ca-bundle\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.735138 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-config-data\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.735165 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsgjz\" (UniqueName: \"kubernetes.io/projected/fde2ec01-3a8c-4264-b307-7d7ac3682499-kube-api-access-fsgjz\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.735195 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.735211 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-config\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.735229 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-scripts\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.735257 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.736123 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.743227 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-sb\") pod 
\"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.744848 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-kube-api-access-shnxd" (OuterVolumeSpecName: "kube-api-access-shnxd") pod "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" (UID: "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd"). InnerVolumeSpecName "kube-api-access-shnxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.745530 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.746916 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acec782e-7dc5-449d-a3bc-15e6100aa7c6-logs\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.750697 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-config\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.754371 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-combined-ca-bundle\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.755093 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.774177 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-scripts\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.774212 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-config-data\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.785995 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrspf\" (UniqueName: \"kubernetes.io/projected/acec782e-7dc5-449d-a3bc-15e6100aa7c6-kube-api-access-zrspf\") pod \"placement-db-sync-dk2k6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " pod="openstack/placement-db-sync-dk2k6" Nov 26 
07:18:33 crc kubenswrapper[4909]: I1126 07:18:33.792924 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsgjz\" (UniqueName: \"kubernetes.io/projected/fde2ec01-3a8c-4264-b307-7d7ac3682499-kube-api-access-fsgjz\") pod \"dnsmasq-dns-76fcf4b695-zsqsd\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") " pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.820131 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-config" (OuterVolumeSpecName: "config") pod "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" (UID: "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.840167 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" (UID: "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.840388 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.840619 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shnxd\" (UniqueName: \"kubernetes.io/projected/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-kube-api-access-shnxd\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.846667 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.851557 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" (UID: "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.857001 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.864340 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" (UID: "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.875226 4909 generic.go:334] "Generic (PLEG): container finished" podID="1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" containerID="3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b" exitCode=0 Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.875312 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.875312 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" event={"ID":"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd","Type":"ContainerDied","Data":"3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b"} Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.875374 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" event={"ID":"1ed410ba-27c9-4ec5-9a1f-728fbb095cdd","Type":"ContainerDied","Data":"5049ec1b0204deb9ee603cfb97a5611d014c33df3ab29db0c9146769a281901d"} Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.875399 4909 scope.go:117] "RemoveContainer" containerID="3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.875331 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-m7d64" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.877142 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" (UID: "1ed410ba-27c9-4ec5-9a1f-728fbb095cdd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.883369 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.905241 4909 scope.go:117] "RemoveContainer" containerID="3c2aed84ce38c8863dcfed3810ba9d72d56a7fe2b53098010fd8deaa8efe64c5" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.941508 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-nb\") pod \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.941630 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-svc\") pod \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.941673 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-swift-storage-0\") pod \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.941707 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-config\") pod \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.941723 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-sb\") pod 
\"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.941743 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g85lz\" (UniqueName: \"kubernetes.io/projected/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-kube-api-access-g85lz\") pod \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\" (UID: \"dd6b629b-63cd-4bd8-91c2-f894a123d2c4\") " Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.941954 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.941967 4909 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.941976 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.941985 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.943125 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dd6b629b-63cd-4bd8-91c2-f894a123d2c4" (UID: "dd6b629b-63cd-4bd8-91c2-f894a123d2c4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.943777 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-config" (OuterVolumeSpecName: "config") pod "dd6b629b-63cd-4bd8-91c2-f894a123d2c4" (UID: "dd6b629b-63cd-4bd8-91c2-f894a123d2c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.943799 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dd6b629b-63cd-4bd8-91c2-f894a123d2c4" (UID: "dd6b629b-63cd-4bd8-91c2-f894a123d2c4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.943832 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd6b629b-63cd-4bd8-91c2-f894a123d2c4" (UID: "dd6b629b-63cd-4bd8-91c2-f894a123d2c4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.944046 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dd6b629b-63cd-4bd8-91c2-f894a123d2c4" (UID: "dd6b629b-63cd-4bd8-91c2-f894a123d2c4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.950763 4909 scope.go:117] "RemoveContainer" containerID="3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.950983 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-kube-api-access-g85lz" (OuterVolumeSpecName: "kube-api-access-g85lz") pod "dd6b629b-63cd-4bd8-91c2-f894a123d2c4" (UID: "dd6b629b-63cd-4bd8-91c2-f894a123d2c4"). InnerVolumeSpecName "kube-api-access-g85lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:34 crc kubenswrapper[4909]: E1126 07:18:33.951230 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b\": container with ID starting with 3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b not found: ID does not exist" containerID="3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.951303 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b"} err="failed to get container status \"3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b\": rpc error: code = NotFound desc = could not find container \"3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b\": container with ID starting with 3fe09b7316d6dd958671e93086e37666d006dd581dd22ec94b2e1446b916927b not found: ID does not exist" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.951369 4909 scope.go:117] "RemoveContainer" containerID="3c2aed84ce38c8863dcfed3810ba9d72d56a7fe2b53098010fd8deaa8efe64c5" Nov 26 07:18:34 crc kubenswrapper[4909]: E1126 07:18:33.952099 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c2aed84ce38c8863dcfed3810ba9d72d56a7fe2b53098010fd8deaa8efe64c5\": container with ID starting with 3c2aed84ce38c8863dcfed3810ba9d72d56a7fe2b53098010fd8deaa8efe64c5 not found: ID does not exist" containerID="3c2aed84ce38c8863dcfed3810ba9d72d56a7fe2b53098010fd8deaa8efe64c5" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:33.952134 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c2aed84ce38c8863dcfed3810ba9d72d56a7fe2b53098010fd8deaa8efe64c5"} err="failed to get container status \"3c2aed84ce38c8863dcfed3810ba9d72d56a7fe2b53098010fd8deaa8efe64c5\": rpc error: code = NotFound desc = could not find container \"3c2aed84ce38c8863dcfed3810ba9d72d56a7fe2b53098010fd8deaa8efe64c5\": container with ID starting with 3c2aed84ce38c8863dcfed3810ba9d72d56a7fe2b53098010fd8deaa8efe64c5 not found: ID does not exist" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.044271 4909 reconciler_common.go:293] "Volume detached for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.044612 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.044628 4909 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.044641 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.044651 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.044663 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g85lz\" (UniqueName: \"kubernetes.io/projected/dd6b629b-63cd-4bd8-91c2-f894a123d2c4-kube-api-access-g85lz\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.213709 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-m7d64"] Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.230154 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-m7d64"] Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.522557 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" path="/var/lib/kubelet/pods/1ed410ba-27c9-4ec5-9a1f-728fbb095cdd/volumes" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.765923 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kkgjx"] Nov 26 07:18:34 crc kubenswrapper[4909]: W1126 07:18:34.781962 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54d7e92e_1775_4a5e_b00f_672e3ad05283.slice/crio-519deed230078daa8f9ba4d06591ecca41f842d85d10123750bca0aa21518448 WatchSource:0}: Error finding container 519deed230078daa8f9ba4d06591ecca41f842d85d10123750bca0aa21518448: Status 404 returned error can't find the container with id 519deed230078daa8f9ba4d06591ecca41f842d85d10123750bca0aa21518448 Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.898127 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dk2k6"] Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.903236 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kkgjx" event={"ID":"54d7e92e-1775-4a5e-b00f-672e3ad05283","Type":"ContainerStarted","Data":"519deed230078daa8f9ba4d06591ecca41f842d85d10123750bca0aa21518448"} Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.904748 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-fvv4p" Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.904786 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jlpgv" event={"ID":"b6421e06-7f96-420b-8aa1-04fa59e832e9","Type":"ContainerStarted","Data":"6210da3d155444e7f371d4bca257df57852024396456b640445db6f06a1f1fd5"} Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.936874 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.962024 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-zsqsd"] Nov 26 07:18:34 crc kubenswrapper[4909]: I1126 07:18:34.974291 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jlpgv" podStartSLOduration=1.673924779 podStartE2EDuration="30.974230983s" podCreationTimestamp="2025-11-26 07:18:04 +0000 UTC" firstStartedPulling="2025-11-26 07:18:04.717460158 +0000 UTC m=+1056.863671324" lastFinishedPulling="2025-11-26 07:18:34.017766362 +0000 UTC m=+1086.163977528" observedRunningTime="2025-11-26 07:18:34.931548517 +0000 UTC m=+1087.077759703" watchObservedRunningTime="2025-11-26 07:18:34.974230983 +0000 UTC m=+1087.120442149" Nov 26 07:18:35 crc kubenswrapper[4909]: I1126 07:18:35.067881 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-fvv4p"] Nov 26 07:18:35 crc kubenswrapper[4909]: I1126 07:18:35.076327 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-fvv4p"] Nov 26 07:18:35 crc kubenswrapper[4909]: I1126 07:18:35.337386 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:18:35 crc kubenswrapper[4909]: I1126 07:18:35.916119 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dk2k6" event={"ID":"acec782e-7dc5-449d-a3bc-15e6100aa7c6","Type":"ContainerStarted","Data":"7494676f328cdc98cb898d52296512d612c47233c4ed3fd908ecbef2f52e67f3"} Nov 26 07:18:35 crc kubenswrapper[4909]: I1126 07:18:35.919479 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14d933d-2d7e-43cf-a99d-d03035d13522","Type":"ContainerStarted","Data":"542fc56bd9296847cf0b6bdfbb5570ba82aba0b47f58d9df198ac1d096889016"} Nov 26 07:18:35 crc kubenswrapper[4909]: I1126 07:18:35.920863 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kkgjx" event={"ID":"54d7e92e-1775-4a5e-b00f-672e3ad05283","Type":"ContainerStarted","Data":"6c56e8b96818340bf2a6e82e312eadfbbb8344acf6710722d0e92ef85ab96ebb"} Nov 26 07:18:35 crc kubenswrapper[4909]: I1126 07:18:35.924063 4909 generic.go:334] "Generic (PLEG): container finished" podID="fde2ec01-3a8c-4264-b307-7d7ac3682499" containerID="5e4852f7d478111fe1c268c57eec9fda98fe1fdbf18d00cc9feb59a94e1bec07" exitCode=0 Nov 26 07:18:35 crc kubenswrapper[4909]: I1126 07:18:35.924101 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" event={"ID":"fde2ec01-3a8c-4264-b307-7d7ac3682499","Type":"ContainerDied","Data":"5e4852f7d478111fe1c268c57eec9fda98fe1fdbf18d00cc9feb59a94e1bec07"} Nov 26 07:18:35 crc kubenswrapper[4909]: I1126 07:18:35.924125 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" 
event={"ID":"fde2ec01-3a8c-4264-b307-7d7ac3682499","Type":"ContainerStarted","Data":"383758dc6cd0be74049c055a459cfdd4286c2749437b5f85e4e54d0b7153cf3b"} Nov 26 07:18:35 crc kubenswrapper[4909]: I1126 07:18:35.948713 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kkgjx" podStartSLOduration=2.948695325 podStartE2EDuration="2.948695325s" podCreationTimestamp="2025-11-26 07:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:18:35.93682232 +0000 UTC m=+1088.083033486" watchObservedRunningTime="2025-11-26 07:18:35.948695325 +0000 UTC m=+1088.094906501" Nov 26 07:18:36 crc kubenswrapper[4909]: I1126 07:18:36.510036 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6b629b-63cd-4bd8-91c2-f894a123d2c4" path="/var/lib/kubelet/pods/dd6b629b-63cd-4bd8-91c2-f894a123d2c4/volumes" Nov 26 07:18:36 crc kubenswrapper[4909]: I1126 07:18:36.936879 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" event={"ID":"fde2ec01-3a8c-4264-b307-7d7ac3682499","Type":"ContainerStarted","Data":"eb4fe9e36f8c23ce2c0b49c604d2e948bcfd7a88308bfd14ad79330f54d340af"} Nov 26 07:18:36 crc kubenswrapper[4909]: I1126 07:18:36.937263 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:36 crc kubenswrapper[4909]: I1126 07:18:36.964564 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" podStartSLOduration=3.964537667 podStartE2EDuration="3.964537667s" podCreationTimestamp="2025-11-26 07:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:18:36.955344037 +0000 UTC m=+1089.101555193" watchObservedRunningTime="2025-11-26 07:18:36.964537667 +0000 UTC m=+1089.110748843" Nov 26 07:18:37 crc kubenswrapper[4909]: I1126 07:18:37.300515 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:18:37 crc kubenswrapper[4909]: I1126 07:18:37.300581 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:18:38 crc kubenswrapper[4909]: I1126 07:18:38.955255 4909 generic.go:334] "Generic (PLEG): container finished" podID="54d7e92e-1775-4a5e-b00f-672e3ad05283" containerID="6c56e8b96818340bf2a6e82e312eadfbbb8344acf6710722d0e92ef85ab96ebb" exitCode=0 Nov 26 07:18:38 crc kubenswrapper[4909]: I1126 07:18:38.955329 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kkgjx" event={"ID":"54d7e92e-1775-4a5e-b00f-672e3ad05283","Type":"ContainerDied","Data":"6c56e8b96818340bf2a6e82e312eadfbbb8344acf6710722d0e92ef85ab96ebb"} Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.390313 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a478-account-create-75mwn"] Nov 26 07:18:40 crc kubenswrapper[4909]: E1126 
07:18:40.392231 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" containerName="init" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.392248 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" containerName="init" Nov 26 07:18:40 crc kubenswrapper[4909]: E1126 07:18:40.392265 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" containerName="dnsmasq-dns" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.392271 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" containerName="dnsmasq-dns" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.392433 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed410ba-27c9-4ec5-9a1f-728fbb095cdd" containerName="dnsmasq-dns" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.393255 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a478-account-create-75mwn" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.395123 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.396250 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a478-account-create-75mwn"] Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.506092 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd8mt\" (UniqueName: \"kubernetes.io/projected/8e888ebe-9e2c-4747-8ecc-e03877820810-kube-api-access-nd8mt\") pod \"barbican-a478-account-create-75mwn\" (UID: \"8e888ebe-9e2c-4747-8ecc-e03877820810\") " pod="openstack/barbican-a478-account-create-75mwn" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.590061 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c218-account-create-kqmfw"] Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.591281 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c218-account-create-kqmfw" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.595504 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.603618 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c218-account-create-kqmfw"] Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.611538 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd8mt\" (UniqueName: \"kubernetes.io/projected/8e888ebe-9e2c-4747-8ecc-e03877820810-kube-api-access-nd8mt\") pod \"barbican-a478-account-create-75mwn\" (UID: \"8e888ebe-9e2c-4747-8ecc-e03877820810\") " pod="openstack/barbican-a478-account-create-75mwn" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.635473 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd8mt\" (UniqueName: \"kubernetes.io/projected/8e888ebe-9e2c-4747-8ecc-e03877820810-kube-api-access-nd8mt\") pod \"barbican-a478-account-create-75mwn\" (UID: \"8e888ebe-9e2c-4747-8ecc-e03877820810\") " pod="openstack/barbican-a478-account-create-75mwn" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.713249 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fczn\" (UniqueName: \"kubernetes.io/projected/e7876d93-7bc5-407c-b554-da69dbfa93f0-kube-api-access-6fczn\") pod \"cinder-c218-account-create-kqmfw\" (UID: \"e7876d93-7bc5-407c-b554-da69dbfa93f0\") " pod="openstack/cinder-c218-account-create-kqmfw" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.713761 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a478-account-create-75mwn" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.788576 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cf0d-account-create-64lww"] Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.789716 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cf0d-account-create-64lww" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.792870 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.796856 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cf0d-account-create-64lww"] Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.816174 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fczn\" (UniqueName: \"kubernetes.io/projected/e7876d93-7bc5-407c-b554-da69dbfa93f0-kube-api-access-6fczn\") pod \"cinder-c218-account-create-kqmfw\" (UID: \"e7876d93-7bc5-407c-b554-da69dbfa93f0\") " pod="openstack/cinder-c218-account-create-kqmfw" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.835938 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fczn\" (UniqueName: \"kubernetes.io/projected/e7876d93-7bc5-407c-b554-da69dbfa93f0-kube-api-access-6fczn\") pod \"cinder-c218-account-create-kqmfw\" (UID: \"e7876d93-7bc5-407c-b554-da69dbfa93f0\") " pod="openstack/cinder-c218-account-create-kqmfw" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.912544 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c218-account-create-kqmfw" Nov 26 07:18:40 crc kubenswrapper[4909]: I1126 07:18:40.917242 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kln9t\" (UniqueName: \"kubernetes.io/projected/b438de63-b387-458f-95d3-16d70d981ba5-kube-api-access-kln9t\") pod \"neutron-cf0d-account-create-64lww\" (UID: \"b438de63-b387-458f-95d3-16d70d981ba5\") " pod="openstack/neutron-cf0d-account-create-64lww" Nov 26 07:18:41 crc kubenswrapper[4909]: I1126 07:18:41.019541 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kln9t\" (UniqueName: \"kubernetes.io/projected/b438de63-b387-458f-95d3-16d70d981ba5-kube-api-access-kln9t\") pod \"neutron-cf0d-account-create-64lww\" (UID: \"b438de63-b387-458f-95d3-16d70d981ba5\") " pod="openstack/neutron-cf0d-account-create-64lww" Nov 26 07:18:41 crc kubenswrapper[4909]: I1126 07:18:41.036974 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kln9t\" (UniqueName: \"kubernetes.io/projected/b438de63-b387-458f-95d3-16d70d981ba5-kube-api-access-kln9t\") pod \"neutron-cf0d-account-create-64lww\" (UID: \"b438de63-b387-458f-95d3-16d70d981ba5\") " pod="openstack/neutron-cf0d-account-create-64lww" Nov 26 07:18:41 crc kubenswrapper[4909]: I1126 07:18:41.120454 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cf0d-account-create-64lww" Nov 26 07:18:41 crc kubenswrapper[4909]: I1126 07:18:41.999264 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kkgjx" event={"ID":"54d7e92e-1775-4a5e-b00f-672e3ad05283","Type":"ContainerDied","Data":"519deed230078daa8f9ba4d06591ecca41f842d85d10123750bca0aa21518448"} Nov 26 07:18:41 crc kubenswrapper[4909]: I1126 07:18:41.999547 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="519deed230078daa8f9ba4d06591ecca41f842d85d10123750bca0aa21518448" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.022196 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.145128 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-config-data\") pod \"54d7e92e-1775-4a5e-b00f-672e3ad05283\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.145392 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-combined-ca-bundle\") pod \"54d7e92e-1775-4a5e-b00f-672e3ad05283\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.145457 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zlf4\" (UniqueName: \"kubernetes.io/projected/54d7e92e-1775-4a5e-b00f-672e3ad05283-kube-api-access-4zlf4\") pod \"54d7e92e-1775-4a5e-b00f-672e3ad05283\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.145508 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-fernet-keys\") pod \"54d7e92e-1775-4a5e-b00f-672e3ad05283\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.145552 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-scripts\") pod \"54d7e92e-1775-4a5e-b00f-672e3ad05283\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.146167 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-credential-keys\") pod \"54d7e92e-1775-4a5e-b00f-672e3ad05283\" (UID: \"54d7e92e-1775-4a5e-b00f-672e3ad05283\") " Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.150352 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d7e92e-1775-4a5e-b00f-672e3ad05283-kube-api-access-4zlf4" (OuterVolumeSpecName: "kube-api-access-4zlf4") pod "54d7e92e-1775-4a5e-b00f-672e3ad05283" (UID: "54d7e92e-1775-4a5e-b00f-672e3ad05283"). InnerVolumeSpecName "kube-api-access-4zlf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.150293 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "54d7e92e-1775-4a5e-b00f-672e3ad05283" (UID: "54d7e92e-1775-4a5e-b00f-672e3ad05283"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.156292 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "54d7e92e-1775-4a5e-b00f-672e3ad05283" (UID: "54d7e92e-1775-4a5e-b00f-672e3ad05283"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.159165 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-scripts" (OuterVolumeSpecName: "scripts") pod "54d7e92e-1775-4a5e-b00f-672e3ad05283" (UID: "54d7e92e-1775-4a5e-b00f-672e3ad05283"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.176759 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54d7e92e-1775-4a5e-b00f-672e3ad05283" (UID: "54d7e92e-1775-4a5e-b00f-672e3ad05283"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.187746 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-config-data" (OuterVolumeSpecName: "config-data") pod "54d7e92e-1775-4a5e-b00f-672e3ad05283" (UID: "54d7e92e-1775-4a5e-b00f-672e3ad05283"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.248507 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zlf4\" (UniqueName: \"kubernetes.io/projected/54d7e92e-1775-4a5e-b00f-672e3ad05283-kube-api-access-4zlf4\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.248545 4909 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.248557 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.248565 4909 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.248573 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.248580 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d7e92e-1775-4a5e-b00f-672e3ad05283-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:42 crc kubenswrapper[4909]: W1126 07:18:42.316134 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb438de63_b387_458f_95d3_16d70d981ba5.slice/crio-3bebf32c61feece13cd147c67f41de449de53d42cde07dbdbbecc35a31bd5468 WatchSource:0}: Error finding container 3bebf32c61feece13cd147c67f41de449de53d42cde07dbdbbecc35a31bd5468: Status 404 returned error can't find the container with id 3bebf32c61feece13cd147c67f41de449de53d42cde07dbdbbecc35a31bd5468 Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.316729 4909 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cf0d-account-create-64lww"] Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.326503 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a478-account-create-75mwn"] Nov 26 07:18:42 crc kubenswrapper[4909]: I1126 07:18:42.335104 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c218-account-create-kqmfw"] Nov 26 07:18:42 crc kubenswrapper[4909]: W1126 07:18:42.338169 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e888ebe_9e2c_4747_8ecc_e03877820810.slice/crio-080357ecb88507f9312e0d9dd73aafb81f3540256d70b7e8d672a9d3c2f7d4fa WatchSource:0}: Error finding container 080357ecb88507f9312e0d9dd73aafb81f3540256d70b7e8d672a9d3c2f7d4fa: Status 404 returned error can't find the container with id 080357ecb88507f9312e0d9dd73aafb81f3540256d70b7e8d672a9d3c2f7d4fa Nov 26 07:18:42 crc kubenswrapper[4909]: W1126 07:18:42.352879 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7876d93_7bc5_407c_b554_da69dbfa93f0.slice/crio-308086b0d2a3d79da6bc3ce9dfba592b4dd0976582001d434ec3430676e0ba83 WatchSource:0}: Error finding container 308086b0d2a3d79da6bc3ce9dfba592b4dd0976582001d434ec3430676e0ba83: Status 404 returned error can't find the container with id 308086b0d2a3d79da6bc3ce9dfba592b4dd0976582001d434ec3430676e0ba83 Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.011925 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dk2k6" event={"ID":"acec782e-7dc5-449d-a3bc-15e6100aa7c6","Type":"ContainerStarted","Data":"7dbcd9530f98b4291aedc42f04a1ccaf1afe80da54613d30af8e73b73490b9c0"} Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.016673 4909 generic.go:334] "Generic (PLEG): container finished" podID="b438de63-b387-458f-95d3-16d70d981ba5" containerID="28547ad618498ebe7793a9e4cfb0178020778a8b517d6b749e977e83c72864c4" exitCode=0 Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.016742 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf0d-account-create-64lww" event={"ID":"b438de63-b387-458f-95d3-16d70d981ba5","Type":"ContainerDied","Data":"28547ad618498ebe7793a9e4cfb0178020778a8b517d6b749e977e83c72864c4"} Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.016763 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf0d-account-create-64lww" event={"ID":"b438de63-b387-458f-95d3-16d70d981ba5","Type":"ContainerStarted","Data":"3bebf32c61feece13cd147c67f41de449de53d42cde07dbdbbecc35a31bd5468"} Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.018647 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14d933d-2d7e-43cf-a99d-d03035d13522","Type":"ContainerStarted","Data":"4a11fc09105962dfc587407b829c9ca5151ed17190d6a3a5284c96ef34e8d3fe"} Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.020338 4909 generic.go:334] "Generic (PLEG): container finished" podID="e7876d93-7bc5-407c-b554-da69dbfa93f0" containerID="91d1d8757dc97d8802d5b7224b14de05dd0bc7dc749f4656e24f8e2938bed616" exitCode=0 Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.020378 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c218-account-create-kqmfw" 
event={"ID":"e7876d93-7bc5-407c-b554-da69dbfa93f0","Type":"ContainerDied","Data":"91d1d8757dc97d8802d5b7224b14de05dd0bc7dc749f4656e24f8e2938bed616"} Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.020393 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c218-account-create-kqmfw" event={"ID":"e7876d93-7bc5-407c-b554-da69dbfa93f0","Type":"ContainerStarted","Data":"308086b0d2a3d79da6bc3ce9dfba592b4dd0976582001d434ec3430676e0ba83"} Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.022499 4909 generic.go:334] "Generic (PLEG): container finished" podID="8e888ebe-9e2c-4747-8ecc-e03877820810" containerID="ed9c1df95b174069c67e0e39f7fdb40918daee66332e03c73a3218bd35e863f3" exitCode=0 Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.022623 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kkgjx" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.022867 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a478-account-create-75mwn" event={"ID":"8e888ebe-9e2c-4747-8ecc-e03877820810","Type":"ContainerDied","Data":"ed9c1df95b174069c67e0e39f7fdb40918daee66332e03c73a3218bd35e863f3"} Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.022931 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a478-account-create-75mwn" event={"ID":"8e888ebe-9e2c-4747-8ecc-e03877820810","Type":"ContainerStarted","Data":"080357ecb88507f9312e0d9dd73aafb81f3540256d70b7e8d672a9d3c2f7d4fa"} Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.027778 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-dk2k6" podStartSLOduration=3.13149947 podStartE2EDuration="10.027760705s" podCreationTimestamp="2025-11-26 07:18:33 +0000 UTC" firstStartedPulling="2025-11-26 07:18:34.921074001 +0000 UTC m=+1087.067285167" lastFinishedPulling="2025-11-26 07:18:41.817335216 +0000 UTC m=+1093.963546402" observedRunningTime="2025-11-26 07:18:43.02687283 +0000 UTC m=+1095.173083996" watchObservedRunningTime="2025-11-26 07:18:43.027760705 +0000 UTC m=+1095.173971881" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.146293 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kkgjx"] Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.152170 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kkgjx"] Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.256319 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qg72z"] Nov 26 07:18:43 crc kubenswrapper[4909]: E1126 07:18:43.256745 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d7e92e-1775-4a5e-b00f-672e3ad05283" containerName="keystone-bootstrap" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.256771 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d7e92e-1775-4a5e-b00f-672e3ad05283" containerName="keystone-bootstrap" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.257032 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d7e92e-1775-4a5e-b00f-672e3ad05283" containerName="keystone-bootstrap" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.257718 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.259816 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.261015 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.261811 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.262176 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgdcb" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.282729 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qg72z"] Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.367700 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-combined-ca-bundle\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.367763 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-credential-keys\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.367785 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-config-data\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.367810 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-fernet-keys\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.367858 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-scripts\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.367889 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw485\" (UniqueName: \"kubernetes.io/projected/2fe10693-bf37-4079-8917-cb194290cf6b-kube-api-access-pw485\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.469355 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-combined-ca-bundle\") pod 
\"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.469746 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-credential-keys\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.469793 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-config-data\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.469840 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-fernet-keys\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.469891 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-scripts\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.469961 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw485\" (UniqueName: \"kubernetes.io/projected/2fe10693-bf37-4079-8917-cb194290cf6b-kube-api-access-pw485\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.474375 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-config-data\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.478150 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-credential-keys\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.479808 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-fernet-keys\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.480452 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-scripts\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.489787 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pw485\" (UniqueName: \"kubernetes.io/projected/2fe10693-bf37-4079-8917-cb194290cf6b-kube-api-access-pw485\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.495419 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-combined-ca-bundle\") pod \"keystone-bootstrap-qg72z\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.613344 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.858609 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.919725 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wq7zz"] Nov 26 07:18:43 crc kubenswrapper[4909]: I1126 07:18:43.919958 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-wq7zz" podUID="90345f3d-54b4-4d46-87b1-df25e4e017b1" containerName="dnsmasq-dns" containerID="cri-o://563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99" gracePeriod=10 Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.049903 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dk2k6" event={"ID":"acec782e-7dc5-449d-a3bc-15e6100aa7c6","Type":"ContainerDied","Data":"7dbcd9530f98b4291aedc42f04a1ccaf1afe80da54613d30af8e73b73490b9c0"} Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.049880 4909 generic.go:334] "Generic (PLEG): container finished" podID="acec782e-7dc5-449d-a3bc-15e6100aa7c6" containerID="7dbcd9530f98b4291aedc42f04a1ccaf1afe80da54613d30af8e73b73490b9c0" exitCode=0 Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.057174 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14d933d-2d7e-43cf-a99d-d03035d13522","Type":"ContainerStarted","Data":"df939f2bae7228f4faafee702ecf65f5a69ff7d889fcea6b2068289dd6b3f8fe"} Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.061069 4909 generic.go:334] "Generic (PLEG): container finished" podID="b6421e06-7f96-420b-8aa1-04fa59e832e9" containerID="6210da3d155444e7f371d4bca257df57852024396456b640445db6f06a1f1fd5" exitCode=0 Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.061248 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jlpgv" event={"ID":"b6421e06-7f96-420b-8aa1-04fa59e832e9","Type":"ContainerDied","Data":"6210da3d155444e7f371d4bca257df57852024396456b640445db6f06a1f1fd5"} Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.083771 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qg72z"] Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.395739 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wq7zz" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.487574 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-dns-svc\") pod \"90345f3d-54b4-4d46-87b1-df25e4e017b1\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.487699 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-sb\") pod \"90345f3d-54b4-4d46-87b1-df25e4e017b1\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.487747 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-nb\") pod \"90345f3d-54b4-4d46-87b1-df25e4e017b1\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.487804 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6q2v\" (UniqueName: \"kubernetes.io/projected/90345f3d-54b4-4d46-87b1-df25e4e017b1-kube-api-access-h6q2v\") pod \"90345f3d-54b4-4d46-87b1-df25e4e017b1\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.487831 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-config\") pod \"90345f3d-54b4-4d46-87b1-df25e4e017b1\" (UID: \"90345f3d-54b4-4d46-87b1-df25e4e017b1\") " Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.530219 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90345f3d-54b4-4d46-87b1-df25e4e017b1-kube-api-access-h6q2v" (OuterVolumeSpecName: "kube-api-access-h6q2v") pod "90345f3d-54b4-4d46-87b1-df25e4e017b1" (UID: "90345f3d-54b4-4d46-87b1-df25e4e017b1"). InnerVolumeSpecName "kube-api-access-h6q2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.561754 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d7e92e-1775-4a5e-b00f-672e3ad05283" path="/var/lib/kubelet/pods/54d7e92e-1775-4a5e-b00f-672e3ad05283/volumes" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.583681 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90345f3d-54b4-4d46-87b1-df25e4e017b1" (UID: "90345f3d-54b4-4d46-87b1-df25e4e017b1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.592456 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.592497 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6q2v\" (UniqueName: \"kubernetes.io/projected/90345f3d-54b4-4d46-87b1-df25e4e017b1-kube-api-access-h6q2v\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.608781 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a478-account-create-75mwn" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.609278 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "90345f3d-54b4-4d46-87b1-df25e4e017b1" (UID: "90345f3d-54b4-4d46-87b1-df25e4e017b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.626533 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cf0d-account-create-64lww" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.636014 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-config" (OuterVolumeSpecName: "config") pod "90345f3d-54b4-4d46-87b1-df25e4e017b1" (UID: "90345f3d-54b4-4d46-87b1-df25e4e017b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.638474 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "90345f3d-54b4-4d46-87b1-df25e4e017b1" (UID: "90345f3d-54b4-4d46-87b1-df25e4e017b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.665770 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c218-account-create-kqmfw" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.693864 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd8mt\" (UniqueName: \"kubernetes.io/projected/8e888ebe-9e2c-4747-8ecc-e03877820810-kube-api-access-nd8mt\") pod \"8e888ebe-9e2c-4747-8ecc-e03877820810\" (UID: \"8e888ebe-9e2c-4747-8ecc-e03877820810\") " Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.693966 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kln9t\" (UniqueName: \"kubernetes.io/projected/b438de63-b387-458f-95d3-16d70d981ba5-kube-api-access-kln9t\") pod \"b438de63-b387-458f-95d3-16d70d981ba5\" (UID: \"b438de63-b387-458f-95d3-16d70d981ba5\") " Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.694384 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.694406 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.694420 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90345f3d-54b4-4d46-87b1-df25e4e017b1-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.697117 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e888ebe-9e2c-4747-8ecc-e03877820810-kube-api-access-nd8mt" (OuterVolumeSpecName: "kube-api-access-nd8mt") pod "8e888ebe-9e2c-4747-8ecc-e03877820810" (UID: "8e888ebe-9e2c-4747-8ecc-e03877820810"). InnerVolumeSpecName "kube-api-access-nd8mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.704014 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b438de63-b387-458f-95d3-16d70d981ba5-kube-api-access-kln9t" (OuterVolumeSpecName: "kube-api-access-kln9t") pod "b438de63-b387-458f-95d3-16d70d981ba5" (UID: "b438de63-b387-458f-95d3-16d70d981ba5"). InnerVolumeSpecName "kube-api-access-kln9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.795479 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fczn\" (UniqueName: \"kubernetes.io/projected/e7876d93-7bc5-407c-b554-da69dbfa93f0-kube-api-access-6fczn\") pod \"e7876d93-7bc5-407c-b554-da69dbfa93f0\" (UID: \"e7876d93-7bc5-407c-b554-da69dbfa93f0\") " Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.805056 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7876d93-7bc5-407c-b554-da69dbfa93f0-kube-api-access-6fczn" (OuterVolumeSpecName: "kube-api-access-6fczn") pod "e7876d93-7bc5-407c-b554-da69dbfa93f0" (UID: "e7876d93-7bc5-407c-b554-da69dbfa93f0"). InnerVolumeSpecName "kube-api-access-6fczn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.805435 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fczn\" (UniqueName: \"kubernetes.io/projected/e7876d93-7bc5-407c-b554-da69dbfa93f0-kube-api-access-6fczn\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.805473 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd8mt\" (UniqueName: \"kubernetes.io/projected/8e888ebe-9e2c-4747-8ecc-e03877820810-kube-api-access-nd8mt\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:44 crc kubenswrapper[4909]: I1126 07:18:44.805489 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kln9t\" (UniqueName: \"kubernetes.io/projected/b438de63-b387-458f-95d3-16d70d981ba5-kube-api-access-kln9t\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.074431 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c218-account-create-kqmfw" event={"ID":"e7876d93-7bc5-407c-b554-da69dbfa93f0","Type":"ContainerDied","Data":"308086b0d2a3d79da6bc3ce9dfba592b4dd0976582001d434ec3430676e0ba83"} Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.074447 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c218-account-create-kqmfw" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.074478 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="308086b0d2a3d79da6bc3ce9dfba592b4dd0976582001d434ec3430676e0ba83" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.077970 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a478-account-create-75mwn" event={"ID":"8e888ebe-9e2c-4747-8ecc-e03877820810","Type":"ContainerDied","Data":"080357ecb88507f9312e0d9dd73aafb81f3540256d70b7e8d672a9d3c2f7d4fa"} Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.077992 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="080357ecb88507f9312e0d9dd73aafb81f3540256d70b7e8d672a9d3c2f7d4fa" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.078046 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a478-account-create-75mwn" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.088147 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qg72z" event={"ID":"2fe10693-bf37-4079-8917-cb194290cf6b","Type":"ContainerStarted","Data":"e94e839d97164314b0ed8d5b6e1c88d716cfd42776a9c7b163da0d38843a1b27"} Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.088200 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qg72z" event={"ID":"2fe10693-bf37-4079-8917-cb194290cf6b","Type":"ContainerStarted","Data":"be13f50384a235b87342ee9bc964030d93a2a3cd269978dace29b0fc77a08005"} Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.092087 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cf0d-account-create-64lww" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.092675 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf0d-account-create-64lww" event={"ID":"b438de63-b387-458f-95d3-16d70d981ba5","Type":"ContainerDied","Data":"3bebf32c61feece13cd147c67f41de449de53d42cde07dbdbbecc35a31bd5468"} Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.092700 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bebf32c61feece13cd147c67f41de449de53d42cde07dbdbbecc35a31bd5468" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.099927 4909 generic.go:334] "Generic (PLEG): container finished" podID="90345f3d-54b4-4d46-87b1-df25e4e017b1" containerID="563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99" exitCode=0 Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.100151 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wq7zz" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.104025 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wq7zz" event={"ID":"90345f3d-54b4-4d46-87b1-df25e4e017b1","Type":"ContainerDied","Data":"563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99"} Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.104077 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wq7zz" event={"ID":"90345f3d-54b4-4d46-87b1-df25e4e017b1","Type":"ContainerDied","Data":"0fdcb0d9c49003ba06546150e1af4df0250f6394a8b353cec99c78390dcaef53"} Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.104104 4909 scope.go:117] "RemoveContainer" containerID="563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.115318 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qg72z" podStartSLOduration=2.115298365 podStartE2EDuration="2.115298365s" podCreationTimestamp="2025-11-26 07:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:18:45.108224382 +0000 UTC m=+1097.254435578" watchObservedRunningTime="2025-11-26 07:18:45.115298365 +0000 UTC m=+1097.261509531" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.159891 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wq7zz"] Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.174273 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wq7zz"] Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.255127 4909 scope.go:117] "RemoveContainer" containerID="3763ab134518f05b4ced8e83f5fcd07ca09ca4666c069c56092ecb965117ea1c" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.351544 4909 scope.go:117] "RemoveContainer" containerID="563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99" Nov 26 07:18:45 crc kubenswrapper[4909]: E1126 07:18:45.352108 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99\": container with ID starting with 563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99 not found: ID does not exist" 
containerID="563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.352133 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99"} err="failed to get container status \"563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99\": rpc error: code = NotFound desc = could not find container \"563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99\": container with ID starting with 563c94d82883de0e32d6ca58ec641b9061481a7215da2b38fc66d583a7d02d99 not found: ID does not exist" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.352153 4909 scope.go:117] "RemoveContainer" containerID="3763ab134518f05b4ced8e83f5fcd07ca09ca4666c069c56092ecb965117ea1c" Nov 26 07:18:45 crc kubenswrapper[4909]: E1126 07:18:45.354073 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3763ab134518f05b4ced8e83f5fcd07ca09ca4666c069c56092ecb965117ea1c\": container with ID starting with 3763ab134518f05b4ced8e83f5fcd07ca09ca4666c069c56092ecb965117ea1c not found: ID does not exist" containerID="3763ab134518f05b4ced8e83f5fcd07ca09ca4666c069c56092ecb965117ea1c" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.354094 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3763ab134518f05b4ced8e83f5fcd07ca09ca4666c069c56092ecb965117ea1c"} err="failed to get container status \"3763ab134518f05b4ced8e83f5fcd07ca09ca4666c069c56092ecb965117ea1c\": rpc error: code = NotFound desc = could not find container \"3763ab134518f05b4ced8e83f5fcd07ca09ca4666c069c56092ecb965117ea1c\": container with ID starting with 3763ab134518f05b4ced8e83f5fcd07ca09ca4666c069c56092ecb965117ea1c not found: ID does not exist" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.534316 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.655882 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-config-data\") pod \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.656235 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acec782e-7dc5-449d-a3bc-15e6100aa7c6-logs\") pod \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.656372 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-combined-ca-bundle\") pod \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.656519 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrspf\" (UniqueName: \"kubernetes.io/projected/acec782e-7dc5-449d-a3bc-15e6100aa7c6-kube-api-access-zrspf\") pod \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.656709 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-scripts\") pod \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\" (UID: \"acec782e-7dc5-449d-a3bc-15e6100aa7c6\") " Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.661988 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acec782e-7dc5-449d-a3bc-15e6100aa7c6-logs" (OuterVolumeSpecName: "logs") pod "acec782e-7dc5-449d-a3bc-15e6100aa7c6" (UID: "acec782e-7dc5-449d-a3bc-15e6100aa7c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.664989 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acec782e-7dc5-449d-a3bc-15e6100aa7c6-kube-api-access-zrspf" (OuterVolumeSpecName: "kube-api-access-zrspf") pod "acec782e-7dc5-449d-a3bc-15e6100aa7c6" (UID: "acec782e-7dc5-449d-a3bc-15e6100aa7c6"). InnerVolumeSpecName "kube-api-access-zrspf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.665243 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-scripts" (OuterVolumeSpecName: "scripts") pod "acec782e-7dc5-449d-a3bc-15e6100aa7c6" (UID: "acec782e-7dc5-449d-a3bc-15e6100aa7c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.702725 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-config-data" (OuterVolumeSpecName: "config-data") pod "acec782e-7dc5-449d-a3bc-15e6100aa7c6" (UID: "acec782e-7dc5-449d-a3bc-15e6100aa7c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.703020 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.716912 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acec782e-7dc5-449d-a3bc-15e6100aa7c6" (UID: "acec782e-7dc5-449d-a3bc-15e6100aa7c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.758961 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acec782e-7dc5-449d-a3bc-15e6100aa7c6-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.758991 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.759000 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrspf\" (UniqueName: \"kubernetes.io/projected/acec782e-7dc5-449d-a3bc-15e6100aa7c6-kube-api-access-zrspf\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.759008 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.759015 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acec782e-7dc5-449d-a3bc-15e6100aa7c6-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.861770 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-db-sync-config-data\") pod \"b6421e06-7f96-420b-8aa1-04fa59e832e9\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.861842 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-combined-ca-bundle\") pod \"b6421e06-7f96-420b-8aa1-04fa59e832e9\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.861966 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zq4g\" (UniqueName: \"kubernetes.io/projected/b6421e06-7f96-420b-8aa1-04fa59e832e9-kube-api-access-8zq4g\") pod \"b6421e06-7f96-420b-8aa1-04fa59e832e9\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.862031 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-config-data\") pod \"b6421e06-7f96-420b-8aa1-04fa59e832e9\" (UID: \"b6421e06-7f96-420b-8aa1-04fa59e832e9\") " Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.865639 4909 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b6421e06-7f96-420b-8aa1-04fa59e832e9" (UID: "b6421e06-7f96-420b-8aa1-04fa59e832e9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.865788 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6421e06-7f96-420b-8aa1-04fa59e832e9-kube-api-access-8zq4g" (OuterVolumeSpecName: "kube-api-access-8zq4g") pod "b6421e06-7f96-420b-8aa1-04fa59e832e9" (UID: "b6421e06-7f96-420b-8aa1-04fa59e832e9"). InnerVolumeSpecName "kube-api-access-8zq4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.921936 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6421e06-7f96-420b-8aa1-04fa59e832e9" (UID: "b6421e06-7f96-420b-8aa1-04fa59e832e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.957144 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-config-data" (OuterVolumeSpecName: "config-data") pod "b6421e06-7f96-420b-8aa1-04fa59e832e9" (UID: "b6421e06-7f96-420b-8aa1-04fa59e832e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.970754 4909 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.970786 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.970797 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zq4g\" (UniqueName: \"kubernetes.io/projected/b6421e06-7f96-420b-8aa1-04fa59e832e9-kube-api-access-8zq4g\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.970807 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6421e06-7f96-420b-8aa1-04fa59e832e9-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978298 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gdl8k"] Nov 26 07:18:45 crc kubenswrapper[4909]: E1126 07:18:45.978629 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acec782e-7dc5-449d-a3bc-15e6100aa7c6" containerName="placement-db-sync" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978643 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="acec782e-7dc5-449d-a3bc-15e6100aa7c6" containerName="placement-db-sync" Nov 26 07:18:45 crc kubenswrapper[4909]: E1126 07:18:45.978659 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e888ebe-9e2c-4747-8ecc-e03877820810" containerName="mariadb-account-create" Nov 26 07:18:45 crc 
kubenswrapper[4909]: I1126 07:18:45.978665 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e888ebe-9e2c-4747-8ecc-e03877820810" containerName="mariadb-account-create"
Nov 26 07:18:45 crc kubenswrapper[4909]: E1126 07:18:45.978684 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90345f3d-54b4-4d46-87b1-df25e4e017b1" containerName="dnsmasq-dns"
Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978690 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="90345f3d-54b4-4d46-87b1-df25e4e017b1" containerName="dnsmasq-dns"
Nov 26 07:18:45 crc kubenswrapper[4909]: E1126 07:18:45.978700 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7876d93-7bc5-407c-b554-da69dbfa93f0" containerName="mariadb-account-create"
Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978706 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7876d93-7bc5-407c-b554-da69dbfa93f0" containerName="mariadb-account-create"
Nov 26 07:18:45 crc kubenswrapper[4909]: E1126 07:18:45.978729 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b438de63-b387-458f-95d3-16d70d981ba5" containerName="mariadb-account-create"
Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978736 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b438de63-b387-458f-95d3-16d70d981ba5" containerName="mariadb-account-create"
Nov 26 07:18:45 crc kubenswrapper[4909]: E1126 07:18:45.978744 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90345f3d-54b4-4d46-87b1-df25e4e017b1" containerName="init"
Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978749 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="90345f3d-54b4-4d46-87b1-df25e4e017b1" containerName="init"
Nov 26 07:18:45 crc kubenswrapper[4909]: E1126 07:18:45.978760 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6421e06-7f96-420b-8aa1-04fa59e832e9" containerName="glance-db-sync"
Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978766 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6421e06-7f96-420b-8aa1-04fa59e832e9" containerName="glance-db-sync"
Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978904 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="90345f3d-54b4-4d46-87b1-df25e4e017b1" containerName="dnsmasq-dns"
Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978917 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6421e06-7f96-420b-8aa1-04fa59e832e9" containerName="glance-db-sync"
Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978925 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b438de63-b387-458f-95d3-16d70d981ba5" containerName="mariadb-account-create"
Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978937 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="acec782e-7dc5-449d-a3bc-15e6100aa7c6" containerName="placement-db-sync"
Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978947 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e888ebe-9e2c-4747-8ecc-e03877820810" containerName="mariadb-account-create"
Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.978959 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7876d93-7bc5-407c-b554-da69dbfa93f0" containerName="mariadb-account-create"
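The interleaved E/I pairs above are the kubelet's CPU and memory managers purging pinning state for pods that have finished (the db-sync and mariadb-account-create jobs and the replaced dnsmasq-dns pod) before admitting openstack/neutron-db-sync-gdl8k. A sketch, again against an assumed journal.txt, that groups the purged container names by pod UID:

    import re
    from collections import defaultdict

    # Matches both variants seen above: cpu_manager's "RemoveStaleState:
    # removing container" and memory_manager's "RemoveStaleState removing state".
    STALE = re.compile(
        r'RemoveStaleState[^"]*"\s+podUID="([0-9a-f-]+)"\s+containerName="([^"]+)"')

    stale = defaultdict(set)
    with open("journal.txt") as fh:
        for line in fh:
            for uid, name in STALE.findall(line):
                stale[uid].add(name)

    for uid, names in sorted(stale.items()):
        print(uid, "->", ", ".join(sorted(names)))

Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.979448 4909 util.go:30] "No sandbox for pod can be found. 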
Need to start a new one" pod="openstack/neutron-db-sync-gdl8k" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.980954 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.981266 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.981403 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pn9zm" Nov 26 07:18:45 crc kubenswrapper[4909]: I1126 07:18:45.987577 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gdl8k"] Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.129190 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dk2k6" event={"ID":"acec782e-7dc5-449d-a3bc-15e6100aa7c6","Type":"ContainerDied","Data":"7494676f328cdc98cb898d52296512d612c47233c4ed3fd908ecbef2f52e67f3"} Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.129230 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7494676f328cdc98cb898d52296512d612c47233c4ed3fd908ecbef2f52e67f3" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.129281 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dk2k6" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.133729 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jlpgv" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.134743 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jlpgv" event={"ID":"b6421e06-7f96-420b-8aa1-04fa59e832e9","Type":"ContainerDied","Data":"aefd85f4a1de6e2badf3dc343b19ec675e5fd9854143603585be94976745e913"} Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.134786 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aefd85f4a1de6e2badf3dc343b19ec675e5fd9854143603585be94976745e913" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.173000 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-combined-ca-bundle\") pod \"neutron-db-sync-gdl8k\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") " pod="openstack/neutron-db-sync-gdl8k" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.173081 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-config\") pod \"neutron-db-sync-gdl8k\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") " pod="openstack/neutron-db-sync-gdl8k" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.173112 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s85kw\" (UniqueName: \"kubernetes.io/projected/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-kube-api-access-s85kw\") pod \"neutron-db-sync-gdl8k\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") " pod="openstack/neutron-db-sync-gdl8k" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.230693 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5fc4c8f8d8-g2ccp"] Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.232472 4909 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.237188 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.237219 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.237410 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.237496 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.242117 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b9rj4" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.264628 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5fc4c8f8d8-g2ccp"] Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.276934 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-combined-ca-bundle\") pod \"neutron-db-sync-gdl8k\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") " pod="openstack/neutron-db-sync-gdl8k" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.277028 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-config\") pod \"neutron-db-sync-gdl8k\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") " pod="openstack/neutron-db-sync-gdl8k" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.277057 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s85kw\" (UniqueName: \"kubernetes.io/projected/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-kube-api-access-s85kw\") pod \"neutron-db-sync-gdl8k\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") " pod="openstack/neutron-db-sync-gdl8k" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.298151 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-combined-ca-bundle\") pod \"neutron-db-sync-gdl8k\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") " pod="openstack/neutron-db-sync-gdl8k" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.299352 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-config\") pod \"neutron-db-sync-gdl8k\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") " pod="openstack/neutron-db-sync-gdl8k" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.318220 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s85kw\" (UniqueName: \"kubernetes.io/projected/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-kube-api-access-s85kw\") pod \"neutron-db-sync-gdl8k\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") " pod="openstack/neutron-db-sync-gdl8k" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.361443 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gdl8k" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.378533 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-public-tls-certs\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.378840 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-scripts\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.378932 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-combined-ca-bundle\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.379025 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-logs\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.379151 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-internal-tls-certs\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.379273 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-config-data\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.379332 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz4vv\" (UniqueName: \"kubernetes.io/projected/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-kube-api-access-zz4vv\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.455645 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-bd8tw"] Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.457256 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.464324 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-bd8tw"] Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.481282 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-combined-ca-bundle\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.481341 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-logs\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.481372 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-internal-tls-certs\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.481439 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-config-data\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.481456 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz4vv\" (UniqueName: \"kubernetes.io/projected/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-kube-api-access-zz4vv\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.481506 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-public-tls-certs\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.481529 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-scripts\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.494118 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-combined-ca-bundle\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.497904 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-config-data\") pod 
\"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.503316 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-internal-tls-certs\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.504936 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-logs\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.508907 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-public-tls-certs\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.510069 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-scripts\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.510922 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90345f3d-54b4-4d46-87b1-df25e4e017b1" path="/var/lib/kubelet/pods/90345f3d-54b4-4d46-87b1-df25e4e017b1/volumes" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.515352 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz4vv\" (UniqueName: \"kubernetes.io/projected/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-kube-api-access-zz4vv\") pod \"placement-5fc4c8f8d8-g2ccp\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.561202 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.587315 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.587400 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.587428 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-config\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.587450 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.587493 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.587525 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj89h\" (UniqueName: \"kubernetes.io/projected/951c976a-6ae8-4801-8c3c-de061e016828-kube-api-access-kj89h\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.689016 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-config\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.689340 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.689405 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: 
\"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.689470 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj89h\" (UniqueName: \"kubernetes.io/projected/951c976a-6ae8-4801-8c3c-de061e016828-kube-api-access-kj89h\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.689516 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.689628 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.690160 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.690501 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.691096 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-config\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.691385 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.715904 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj89h\" (UniqueName: \"kubernetes.io/projected/951c976a-6ae8-4801-8c3c-de061e016828-kube-api-access-kj89h\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.847133 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-bd8tw\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 
07:18:46 crc kubenswrapper[4909]: I1126 07:18:46.948675 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gdl8k"] Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.077287 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.150126 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5fc4c8f8d8-g2ccp"] Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.157158 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gdl8k" event={"ID":"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7","Type":"ContainerStarted","Data":"bd68ba37677621e28343b3121d374cf4ef77fc7fd6e8738545345d5b5489244c"} Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.367727 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.369711 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.373184 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.373373 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mv4fs" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.373487 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.380054 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.506688 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.506738 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2svw8\" (UniqueName: \"kubernetes.io/projected/a920e179-3a15-4052-9f09-8e9264110499-kube-api-access-2svw8\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.506836 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-config-data\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.506886 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.506932 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-scripts\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.506954 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-logs\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.507029 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.584614 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-bd8tw"] Nov 26 07:18:47 crc kubenswrapper[4909]: W1126 07:18:47.603479 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod951c976a_6ae8_4801_8c3c_de061e016828.slice/crio-84313ab6b40d8b2dcbc3929485526324b597c63740760cf2030163f2dc082076 WatchSource:0}: Error finding container 84313ab6b40d8b2dcbc3929485526324b597c63740760cf2030163f2dc082076: Status 404 returned error can't find the container with id 84313ab6b40d8b2dcbc3929485526324b597c63740760cf2030163f2dc082076 Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.607995 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-config-data\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.608149 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.608279 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-scripts\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.608871 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.608885 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-logs\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.609122 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.609200 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.609230 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2svw8\" (UniqueName: \"kubernetes.io/projected/a920e179-3a15-4052-9f09-8e9264110499-kube-api-access-2svw8\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.609439 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-logs\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.610244 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.623123 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-config-data\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.623531 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-scripts\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.636324 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.640333 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.641688 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.649824 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.650270 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2svw8\" (UniqueName: \"kubernetes.io/projected/a920e179-3a15-4052-9f09-8e9264110499-kube-api-access-2svw8\") pod \"glance-default-external-api-0\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") " pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.650312 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.695825 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.711418 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.711491 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.711557 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.711583 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.711615 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.711649 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wpb\" (UniqueName: \"kubernetes.io/projected/312277e7-4018-4a0f-8374-c7f22a6e05f1-kube-api-access-k4wpb\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " 
pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.711688 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.712031 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.814360 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wpb\" (UniqueName: \"kubernetes.io/projected/312277e7-4018-4a0f-8374-c7f22a6e05f1-kube-api-access-k4wpb\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.815009 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.815041 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.815087 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.815130 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.815156 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.815174 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.815551 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.816789 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.817098 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.821904 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.822395 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.831318 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.843460 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wpb\" (UniqueName: \"kubernetes.io/projected/312277e7-4018-4a0f-8374-c7f22a6e05f1-kube-api-access-k4wpb\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.861733 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") " pod="openstack/glance-default-internal-api-0" Nov 26 07:18:47 crc kubenswrapper[4909]: I1126 07:18:47.980271 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 07:18:48 crc kubenswrapper[4909]: I1126 07:18:48.195664 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" event={"ID":"951c976a-6ae8-4801-8c3c-de061e016828","Type":"ContainerStarted","Data":"84313ab6b40d8b2dcbc3929485526324b597c63740760cf2030163f2dc082076"} Nov 26 07:18:48 crc kubenswrapper[4909]: I1126 07:18:48.200636 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gdl8k" event={"ID":"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7","Type":"ContainerStarted","Data":"5e90b4e5ac6667f9daf762f2e6bc66cd6b609799cd175f03b8cc8c7f7b63151c"} Nov 26 07:18:48 crc kubenswrapper[4909]: I1126 07:18:48.206859 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5fc4c8f8d8-g2ccp" event={"ID":"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd","Type":"ContainerStarted","Data":"e41849efae31b0c8581f7c9f6ee28750c66a400db628e1800e2864b8a75f5b77"} Nov 26 07:18:48 crc kubenswrapper[4909]: I1126 07:18:48.206896 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5fc4c8f8d8-g2ccp" event={"ID":"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd","Type":"ContainerStarted","Data":"d1360c0fbd5ca81e6923bb6f1040c56e102db54539a428875d4a98ea9d046a6d"} Nov 26 07:18:48 crc kubenswrapper[4909]: I1126 07:18:48.221714 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gdl8k" podStartSLOduration=3.221700082 podStartE2EDuration="3.221700082s" podCreationTimestamp="2025-11-26 07:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:18:48.219473301 +0000 UTC m=+1100.365684477" watchObservedRunningTime="2025-11-26 07:18:48.221700082 +0000 UTC m=+1100.367911248" Nov 26 07:18:49 crc kubenswrapper[4909]: I1126 07:18:49.221906 4909 generic.go:334] "Generic (PLEG): container finished" podID="2fe10693-bf37-4079-8917-cb194290cf6b" containerID="e94e839d97164314b0ed8d5b6e1c88d716cfd42776a9c7b163da0d38843a1b27" exitCode=0 Nov 26 07:18:49 crc kubenswrapper[4909]: I1126 07:18:49.222633 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qg72z" event={"ID":"2fe10693-bf37-4079-8917-cb194290cf6b","Type":"ContainerDied","Data":"e94e839d97164314b0ed8d5b6e1c88d716cfd42776a9c7b163da0d38843a1b27"} Nov 26 07:18:49 crc kubenswrapper[4909]: I1126 07:18:49.456767 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 07:18:49 crc kubenswrapper[4909]: I1126 07:18:49.533284 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 07:18:50 crc kubenswrapper[4909]: I1126 07:18:50.904234 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kfqlk"] Nov 26 07:18:50 crc kubenswrapper[4909]: I1126 07:18:50.905760 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:18:50 crc kubenswrapper[4909]: I1126 07:18:50.908151 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9b2v6" Nov 26 07:18:50 crc kubenswrapper[4909]: I1126 07:18:50.909306 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 26 07:18:50 crc kubenswrapper[4909]: I1126 07:18:50.914309 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-pp2qf"] Nov 26 07:18:50 crc kubenswrapper[4909]: I1126 07:18:50.933870 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kfqlk"] Nov 26 07:18:50 crc kubenswrapper[4909]: I1126 07:18:50.934016 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:50 crc kubenswrapper[4909]: I1126 07:18:50.938573 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 26 07:18:50 crc kubenswrapper[4909]: I1126 07:18:50.938824 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vhhwz" Nov 26 07:18:50 crc kubenswrapper[4909]: I1126 07:18:50.939465 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 26 07:18:50 crc kubenswrapper[4909]: I1126 07:18:50.969735 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pp2qf"] Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.006167 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-combined-ca-bundle\") pod \"barbican-db-sync-kfqlk\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.006233 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-db-sync-config-data\") pod \"barbican-db-sync-kfqlk\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.006437 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmv2c\" (UniqueName: \"kubernetes.io/projected/a4bc32bf-2659-4f99-bb10-8ac0617b317c-kube-api-access-qmv2c\") pod \"barbican-db-sync-kfqlk\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.108057 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrlsz\" (UniqueName: \"kubernetes.io/projected/94fb3d6d-c540-4c6d-af4d-257226561c47-kube-api-access-lrlsz\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.108131 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-combined-ca-bundle\") pod \"barbican-db-sync-kfqlk\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " pod="openstack/barbican-db-sync-kfqlk" Nov 26 
07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.108178 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-db-sync-config-data\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.108207 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-db-sync-config-data\") pod \"barbican-db-sync-kfqlk\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.108264 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-config-data\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.108292 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/94fb3d6d-c540-4c6d-af4d-257226561c47-etc-machine-id\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.108317 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-scripts\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.108348 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmv2c\" (UniqueName: \"kubernetes.io/projected/a4bc32bf-2659-4f99-bb10-8ac0617b317c-kube-api-access-qmv2c\") pod \"barbican-db-sync-kfqlk\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.108563 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-combined-ca-bundle\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.122546 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-db-sync-config-data\") pod \"barbican-db-sync-kfqlk\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.122651 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-combined-ca-bundle\") pod \"barbican-db-sync-kfqlk\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.126380 4909 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmv2c\" (UniqueName: \"kubernetes.io/projected/a4bc32bf-2659-4f99-bb10-8ac0617b317c-kube-api-access-qmv2c\") pod \"barbican-db-sync-kfqlk\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.210393 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-combined-ca-bundle\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.210499 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrlsz\" (UniqueName: \"kubernetes.io/projected/94fb3d6d-c540-4c6d-af4d-257226561c47-kube-api-access-lrlsz\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.210548 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-db-sync-config-data\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.210613 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-config-data\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.210639 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/94fb3d6d-c540-4c6d-af4d-257226561c47-etc-machine-id\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.210654 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-scripts\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.213444 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/94fb3d6d-c540-4c6d-af4d-257226561c47-etc-machine-id\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.217237 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-combined-ca-bundle\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.220495 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-scripts\") pod \"cinder-db-sync-pp2qf\" (UID: 
\"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.222914 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-config-data\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.229003 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.233902 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-db-sync-config-data\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.251416 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrlsz\" (UniqueName: \"kubernetes.io/projected/94fb3d6d-c540-4c6d-af4d-257226561c47-kube-api-access-lrlsz\") pod \"cinder-db-sync-pp2qf\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.253346 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.625100 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.721183 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-fernet-keys\") pod \"2fe10693-bf37-4079-8917-cb194290cf6b\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.721250 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-combined-ca-bundle\") pod \"2fe10693-bf37-4079-8917-cb194290cf6b\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.721294 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-config-data\") pod \"2fe10693-bf37-4079-8917-cb194290cf6b\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.721317 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-scripts\") pod \"2fe10693-bf37-4079-8917-cb194290cf6b\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.721352 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw485\" (UniqueName: \"kubernetes.io/projected/2fe10693-bf37-4079-8917-cb194290cf6b-kube-api-access-pw485\") pod \"2fe10693-bf37-4079-8917-cb194290cf6b\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.721513 4909 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-credential-keys\") pod \"2fe10693-bf37-4079-8917-cb194290cf6b\" (UID: \"2fe10693-bf37-4079-8917-cb194290cf6b\") " Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.728788 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2fe10693-bf37-4079-8917-cb194290cf6b" (UID: "2fe10693-bf37-4079-8917-cb194290cf6b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.728934 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-scripts" (OuterVolumeSpecName: "scripts") pod "2fe10693-bf37-4079-8917-cb194290cf6b" (UID: "2fe10693-bf37-4079-8917-cb194290cf6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.738744 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2fe10693-bf37-4079-8917-cb194290cf6b" (UID: "2fe10693-bf37-4079-8917-cb194290cf6b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.740945 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe10693-bf37-4079-8917-cb194290cf6b-kube-api-access-pw485" (OuterVolumeSpecName: "kube-api-access-pw485") pod "2fe10693-bf37-4079-8917-cb194290cf6b" (UID: "2fe10693-bf37-4079-8917-cb194290cf6b"). InnerVolumeSpecName "kube-api-access-pw485". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.809713 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-config-data" (OuterVolumeSpecName: "config-data") pod "2fe10693-bf37-4079-8917-cb194290cf6b" (UID: "2fe10693-bf37-4079-8917-cb194290cf6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.812250 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fe10693-bf37-4079-8917-cb194290cf6b" (UID: "2fe10693-bf37-4079-8917-cb194290cf6b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.824941 4909 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.824971 4909 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.824980 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.824989 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.824997 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe10693-bf37-4079-8917-cb194290cf6b-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:51 crc kubenswrapper[4909]: I1126 07:18:51.825004 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw485\" (UniqueName: \"kubernetes.io/projected/2fe10693-bf37-4079-8917-cb194290cf6b-kube-api-access-pw485\") on node \"crc\" DevicePath \"\"" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.157324 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kfqlk"] Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.221938 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pp2qf"] Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.271935 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kfqlk" event={"ID":"a4bc32bf-2659-4f99-bb10-8ac0617b317c","Type":"ContainerStarted","Data":"d790075b4931de205dcac4b3be146ff496cb6953a241dc4179cbc41f0a95a59b"} Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.275207 4909 generic.go:334] "Generic (PLEG): container finished" podID="951c976a-6ae8-4801-8c3c-de061e016828" containerID="1b157177c000c40a6da3a9662a9c02a2e5fdf1ff9e3f3f039271b3801854b02c" exitCode=0 Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.275285 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" event={"ID":"951c976a-6ae8-4801-8c3c-de061e016828","Type":"ContainerDied","Data":"1b157177c000c40a6da3a9662a9c02a2e5fdf1ff9e3f3f039271b3801854b02c"} Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.278765 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5fc4c8f8d8-g2ccp" event={"ID":"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd","Type":"ContainerStarted","Data":"a05a2e8d981ebb4cc5877598dba394fd26e24d76c6a72edee8536bc2f0214b86"} Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.278961 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.279293 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 
07:18:52.290889 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pp2qf" event={"ID":"94fb3d6d-c540-4c6d-af4d-257226561c47","Type":"ContainerStarted","Data":"b2e9973be13dfb22210d57d93d1ec344ebe3743456b477a4b32c2ef95f2d58c9"} Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.293014 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qg72z" event={"ID":"2fe10693-bf37-4079-8917-cb194290cf6b","Type":"ContainerDied","Data":"be13f50384a235b87342ee9bc964030d93a2a3cd269978dace29b0fc77a08005"} Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.293056 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be13f50384a235b87342ee9bc964030d93a2a3cd269978dace29b0fc77a08005" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.293126 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qg72z" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.312097 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14d933d-2d7e-43cf-a99d-d03035d13522","Type":"ContainerStarted","Data":"d4f59cbade1571411da46a6ceab0e818bcec0284f7e08b87511ab66230981717"} Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.346772 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5fc4c8f8d8-g2ccp" podStartSLOduration=6.346755917 podStartE2EDuration="6.346755917s" podCreationTimestamp="2025-11-26 07:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:18:52.334047391 +0000 UTC m=+1104.480258557" watchObservedRunningTime="2025-11-26 07:18:52.346755917 +0000 UTC m=+1104.492967103" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.368202 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.725890 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8646886cd-cj5pc"] Nov 26 07:18:52 crc kubenswrapper[4909]: E1126 07:18:52.726917 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe10693-bf37-4079-8917-cb194290cf6b" containerName="keystone-bootstrap" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.726943 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe10693-bf37-4079-8917-cb194290cf6b" containerName="keystone-bootstrap" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.727168 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe10693-bf37-4079-8917-cb194290cf6b" containerName="keystone-bootstrap" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.727738 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.732406 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.732561 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.732701 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.732831 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.732984 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.733097 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgdcb" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.743408 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8646886cd-cj5pc"] Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.846058 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqr78\" (UniqueName: \"kubernetes.io/projected/b0ef7a35-86f9-4afc-9529-ff707ba448a9-kube-api-access-lqr78\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.846133 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-scripts\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.846164 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-combined-ca-bundle\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.846182 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-fernet-keys\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.846203 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-credential-keys\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.846223 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-config-data\") pod \"keystone-8646886cd-cj5pc\" (UID: 
\"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.846421 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-public-tls-certs\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.846492 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-internal-tls-certs\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: W1126 07:18:52.858400 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda920e179_3a15_4052_9f09_8e9264110499.slice/crio-c6a296e4fe1a98cb5e34e25847d6ebcda0c0a6b535c91359dd3f3124e0207ab0 WatchSource:0}: Error finding container c6a296e4fe1a98cb5e34e25847d6ebcda0c0a6b535c91359dd3f3124e0207ab0: Status 404 returned error can't find the container with id c6a296e4fe1a98cb5e34e25847d6ebcda0c0a6b535c91359dd3f3124e0207ab0 Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.953360 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-internal-tls-certs\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.953435 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqr78\" (UniqueName: \"kubernetes.io/projected/b0ef7a35-86f9-4afc-9529-ff707ba448a9-kube-api-access-lqr78\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.953510 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-scripts\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.953552 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-combined-ca-bundle\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.953610 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-fernet-keys\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.953647 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-credential-keys\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.953684 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-config-data\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.953968 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-public-tls-certs\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.959030 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-internal-tls-certs\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.962377 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-credential-keys\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.963138 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-public-tls-certs\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.964259 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-combined-ca-bundle\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.964741 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-config-data\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:52 crc kubenswrapper[4909]: I1126 07:18:52.965130 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-fernet-keys\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:53 crc kubenswrapper[4909]: I1126 07:18:53.004578 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqr78\" (UniqueName: \"kubernetes.io/projected/b0ef7a35-86f9-4afc-9529-ff707ba448a9-kube-api-access-lqr78\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " 
pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:53 crc kubenswrapper[4909]: I1126 07:18:53.019317 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-scripts\") pod \"keystone-8646886cd-cj5pc\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:53 crc kubenswrapper[4909]: I1126 07:18:53.048465 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:53 crc kubenswrapper[4909]: I1126 07:18:53.118329 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 07:18:53 crc kubenswrapper[4909]: I1126 07:18:53.330903 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a920e179-3a15-4052-9f09-8e9264110499","Type":"ContainerStarted","Data":"c6a296e4fe1a98cb5e34e25847d6ebcda0c0a6b535c91359dd3f3124e0207ab0"} Nov 26 07:18:53 crc kubenswrapper[4909]: I1126 07:18:53.335616 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"312277e7-4018-4a0f-8374-c7f22a6e05f1","Type":"ContainerStarted","Data":"142ce54854b3361482a2cc2ca5a024e52b0a9f2e4106947431b35158a256b93c"} Nov 26 07:18:53 crc kubenswrapper[4909]: I1126 07:18:53.339116 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" event={"ID":"951c976a-6ae8-4801-8c3c-de061e016828","Type":"ContainerStarted","Data":"456a094735a54772ec296b5e866c2151739dd521bfe320f46949d078901921bf"} Nov 26 07:18:53 crc kubenswrapper[4909]: I1126 07:18:53.361838 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" podStartSLOduration=7.361821719 podStartE2EDuration="7.361821719s" podCreationTimestamp="2025-11-26 07:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:18:53.359810704 +0000 UTC m=+1105.506021870" watchObservedRunningTime="2025-11-26 07:18:53.361821719 +0000 UTC m=+1105.508032885" Nov 26 07:18:53 crc kubenswrapper[4909]: I1126 07:18:53.450025 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:18:53 crc kubenswrapper[4909]: I1126 07:18:53.556527 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8646886cd-cj5pc"] Nov 26 07:18:54 crc kubenswrapper[4909]: I1126 07:18:54.391385 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"312277e7-4018-4a0f-8374-c7f22a6e05f1","Type":"ContainerStarted","Data":"8b7232feea8f70e385c3485ca1607c7a2b6ba407f57eec36464284590c50f91a"} Nov 26 07:18:54 crc kubenswrapper[4909]: I1126 07:18:54.394908 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8646886cd-cj5pc" event={"ID":"b0ef7a35-86f9-4afc-9529-ff707ba448a9","Type":"ContainerStarted","Data":"c7815d86ff25c599f9e26f760ca58bbfc89cea51769f7eddc87a7472792ccca9"} Nov 26 07:18:54 crc kubenswrapper[4909]: I1126 07:18:54.394951 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8646886cd-cj5pc" event={"ID":"b0ef7a35-86f9-4afc-9529-ff707ba448a9","Type":"ContainerStarted","Data":"e46dd967ada859c733ba0187dec7a10556ed4153b68da348e27a1716bf2ac61f"} Nov 26 07:18:54 crc 
kubenswrapper[4909]: I1126 07:18:54.395109 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:18:54 crc kubenswrapper[4909]: I1126 07:18:54.398911 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a920e179-3a15-4052-9f09-8e9264110499","Type":"ContainerStarted","Data":"93474ad5c7d49c7d192de04aca05929f6481ee988f81b8b1d50cd54f7f05d84a"} Nov 26 07:18:54 crc kubenswrapper[4909]: I1126 07:18:54.399434 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:18:54 crc kubenswrapper[4909]: I1126 07:18:54.419914 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8646886cd-cj5pc" podStartSLOduration=2.419892705 podStartE2EDuration="2.419892705s" podCreationTimestamp="2025-11-26 07:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:18:54.415064124 +0000 UTC m=+1106.561275290" watchObservedRunningTime="2025-11-26 07:18:54.419892705 +0000 UTC m=+1106.566103871" Nov 26 07:18:55 crc kubenswrapper[4909]: I1126 07:18:55.412083 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a920e179-3a15-4052-9f09-8e9264110499","Type":"ContainerStarted","Data":"906c19152db404a652ac530adf1594ba8acb23ff35d0f1fc1212ea8033d5b279"} Nov 26 07:18:55 crc kubenswrapper[4909]: I1126 07:18:55.412193 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a920e179-3a15-4052-9f09-8e9264110499" containerName="glance-log" containerID="cri-o://93474ad5c7d49c7d192de04aca05929f6481ee988f81b8b1d50cd54f7f05d84a" gracePeriod=30 Nov 26 07:18:55 crc kubenswrapper[4909]: I1126 07:18:55.412252 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a920e179-3a15-4052-9f09-8e9264110499" containerName="glance-httpd" containerID="cri-o://906c19152db404a652ac530adf1594ba8acb23ff35d0f1fc1212ea8033d5b279" gracePeriod=30 Nov 26 07:18:55 crc kubenswrapper[4909]: I1126 07:18:55.416134 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"312277e7-4018-4a0f-8374-c7f22a6e05f1","Type":"ContainerStarted","Data":"3d3e1a9e82b8e1afe632eff02f50ae8a31ac118df04f1881c291edd65a09e51f"} Nov 26 07:18:55 crc kubenswrapper[4909]: I1126 07:18:55.416547 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="312277e7-4018-4a0f-8374-c7f22a6e05f1" containerName="glance-log" containerID="cri-o://8b7232feea8f70e385c3485ca1607c7a2b6ba407f57eec36464284590c50f91a" gracePeriod=30 Nov 26 07:18:55 crc kubenswrapper[4909]: I1126 07:18:55.416552 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="312277e7-4018-4a0f-8374-c7f22a6e05f1" containerName="glance-httpd" containerID="cri-o://3d3e1a9e82b8e1afe632eff02f50ae8a31ac118df04f1881c291edd65a09e51f" gracePeriod=30 Nov 26 07:18:55 crc kubenswrapper[4909]: I1126 07:18:55.442376 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.442358949 podStartE2EDuration="9.442358949s" 
podCreationTimestamp="2025-11-26 07:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:18:55.430826204 +0000 UTC m=+1107.577037370" watchObservedRunningTime="2025-11-26 07:18:55.442358949 +0000 UTC m=+1107.588570115" Nov 26 07:18:55 crc kubenswrapper[4909]: I1126 07:18:55.460828 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.460800203 podStartE2EDuration="9.460800203s" podCreationTimestamp="2025-11-26 07:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:18:55.453770591 +0000 UTC m=+1107.599981757" watchObservedRunningTime="2025-11-26 07:18:55.460800203 +0000 UTC m=+1107.607011379" Nov 26 07:18:56 crc kubenswrapper[4909]: I1126 07:18:56.431033 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a920e179-3a15-4052-9f09-8e9264110499","Type":"ContainerDied","Data":"906c19152db404a652ac530adf1594ba8acb23ff35d0f1fc1212ea8033d5b279"} Nov 26 07:18:56 crc kubenswrapper[4909]: I1126 07:18:56.431020 4909 generic.go:334] "Generic (PLEG): container finished" podID="a920e179-3a15-4052-9f09-8e9264110499" containerID="906c19152db404a652ac530adf1594ba8acb23ff35d0f1fc1212ea8033d5b279" exitCode=0 Nov 26 07:18:56 crc kubenswrapper[4909]: I1126 07:18:56.431095 4909 generic.go:334] "Generic (PLEG): container finished" podID="a920e179-3a15-4052-9f09-8e9264110499" containerID="93474ad5c7d49c7d192de04aca05929f6481ee988f81b8b1d50cd54f7f05d84a" exitCode=143 Nov 26 07:18:56 crc kubenswrapper[4909]: I1126 07:18:56.431148 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a920e179-3a15-4052-9f09-8e9264110499","Type":"ContainerDied","Data":"93474ad5c7d49c7d192de04aca05929f6481ee988f81b8b1d50cd54f7f05d84a"} Nov 26 07:18:56 crc kubenswrapper[4909]: I1126 07:18:56.434807 4909 generic.go:334] "Generic (PLEG): container finished" podID="312277e7-4018-4a0f-8374-c7f22a6e05f1" containerID="3d3e1a9e82b8e1afe632eff02f50ae8a31ac118df04f1881c291edd65a09e51f" exitCode=0 Nov 26 07:18:56 crc kubenswrapper[4909]: I1126 07:18:56.434835 4909 generic.go:334] "Generic (PLEG): container finished" podID="312277e7-4018-4a0f-8374-c7f22a6e05f1" containerID="8b7232feea8f70e385c3485ca1607c7a2b6ba407f57eec36464284590c50f91a" exitCode=143 Nov 26 07:18:56 crc kubenswrapper[4909]: I1126 07:18:56.434859 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"312277e7-4018-4a0f-8374-c7f22a6e05f1","Type":"ContainerDied","Data":"3d3e1a9e82b8e1afe632eff02f50ae8a31ac118df04f1881c291edd65a09e51f"} Nov 26 07:18:56 crc kubenswrapper[4909]: I1126 07:18:56.434889 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"312277e7-4018-4a0f-8374-c7f22a6e05f1","Type":"ContainerDied","Data":"8b7232feea8f70e385c3485ca1607c7a2b6ba407f57eec36464284590c50f91a"} Nov 26 07:19:02 crc kubenswrapper[4909]: I1126 07:19:02.079632 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:19:02 crc kubenswrapper[4909]: I1126 07:19:02.146675 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-zsqsd"] Nov 26 07:19:02 crc 
kubenswrapper[4909]: I1126 07:19:02.146938 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" podUID="fde2ec01-3a8c-4264-b307-7d7ac3682499" containerName="dnsmasq-dns" containerID="cri-o://eb4fe9e36f8c23ce2c0b49c604d2e948bcfd7a88308bfd14ad79330f54d340af" gracePeriod=10 Nov 26 07:19:02 crc kubenswrapper[4909]: I1126 07:19:02.508715 4909 generic.go:334] "Generic (PLEG): container finished" podID="fde2ec01-3a8c-4264-b307-7d7ac3682499" containerID="eb4fe9e36f8c23ce2c0b49c604d2e948bcfd7a88308bfd14ad79330f54d340af" exitCode=0 Nov 26 07:19:02 crc kubenswrapper[4909]: I1126 07:19:02.510398 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" event={"ID":"fde2ec01-3a8c-4264-b307-7d7ac3682499","Type":"ContainerDied","Data":"eb4fe9e36f8c23ce2c0b49c604d2e948bcfd7a88308bfd14ad79330f54d340af"} Nov 26 07:19:03 crc kubenswrapper[4909]: I1126 07:19:03.858351 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" podUID="fde2ec01-3a8c-4264-b307-7d7ac3682499" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Nov 26 07:19:07 crc kubenswrapper[4909]: I1126 07:19:07.301343 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:19:07 crc kubenswrapper[4909]: I1126 07:19:07.301673 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:19:08 crc kubenswrapper[4909]: I1126 07:19:08.567622 4909 generic.go:334] "Generic (PLEG): container finished" podID="5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7" containerID="5e90b4e5ac6667f9daf762f2e6bc66cd6b609799cd175f03b8cc8c7f7b63151c" exitCode=0 Nov 26 07:19:08 crc kubenswrapper[4909]: I1126 07:19:08.567704 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gdl8k" event={"ID":"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7","Type":"ContainerDied","Data":"5e90b4e5ac6667f9daf762f2e6bc66cd6b609799cd175f03b8cc8c7f7b63151c"} Nov 26 07:19:08 crc kubenswrapper[4909]: I1126 07:19:08.858977 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" podUID="fde2ec01-3a8c-4264-b307-7d7ac3682499" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Nov 26 07:19:13 crc kubenswrapper[4909]: E1126 07:19:13.739501 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 26 07:19:13 crc kubenswrapper[4909]: E1126 07:19:13.739978 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrlsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-pp2qf_openstack(94fb3d6d-c540-4c6d-af4d-257226561c47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 07:19:13 crc kubenswrapper[4909]: E1126 07:19:13.741213 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-pp2qf" podUID="94fb3d6d-c540-4c6d-af4d-257226561c47" Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.802549 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.815831 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.990661 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a920e179-3a15-4052-9f09-8e9264110499\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.990754 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-config-data\") pod \"312277e7-4018-4a0f-8374-c7f22a6e05f1\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.990787 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2svw8\" (UniqueName: \"kubernetes.io/projected/a920e179-3a15-4052-9f09-8e9264110499-kube-api-access-2svw8\") pod \"a920e179-3a15-4052-9f09-8e9264110499\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.990836 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-httpd-run\") pod \"312277e7-4018-4a0f-8374-c7f22a6e05f1\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.990869 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-scripts\") pod \"a920e179-3a15-4052-9f09-8e9264110499\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.990908 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-combined-ca-bundle\") pod \"a920e179-3a15-4052-9f09-8e9264110499\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.990981 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-logs\") pod \"312277e7-4018-4a0f-8374-c7f22a6e05f1\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.991020 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-scripts\") pod \"312277e7-4018-4a0f-8374-c7f22a6e05f1\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.991037 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"312277e7-4018-4a0f-8374-c7f22a6e05f1\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.991056 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-httpd-run\") pod \"a920e179-3a15-4052-9f09-8e9264110499\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.991078 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4wpb\" (UniqueName: \"kubernetes.io/projected/312277e7-4018-4a0f-8374-c7f22a6e05f1-kube-api-access-k4wpb\") pod \"312277e7-4018-4a0f-8374-c7f22a6e05f1\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.991093 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-logs\") pod \"a920e179-3a15-4052-9f09-8e9264110499\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.991129 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-combined-ca-bundle\") pod \"312277e7-4018-4a0f-8374-c7f22a6e05f1\" (UID: \"312277e7-4018-4a0f-8374-c7f22a6e05f1\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.991183 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-config-data\") pod \"a920e179-3a15-4052-9f09-8e9264110499\" (UID: \"a920e179-3a15-4052-9f09-8e9264110499\") "
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.991711 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-logs" (OuterVolumeSpecName: "logs") pod "312277e7-4018-4a0f-8374-c7f22a6e05f1" (UID: "312277e7-4018-4a0f-8374-c7f22a6e05f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.991868 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a920e179-3a15-4052-9f09-8e9264110499" (UID: "a920e179-3a15-4052-9f09-8e9264110499"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.991989 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-logs" (OuterVolumeSpecName: "logs") pod "a920e179-3a15-4052-9f09-8e9264110499" (UID: "a920e179-3a15-4052-9f09-8e9264110499"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.992005 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "312277e7-4018-4a0f-8374-c7f22a6e05f1" (UID: "312277e7-4018-4a0f-8374-c7f22a6e05f1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.998023 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "312277e7-4018-4a0f-8374-c7f22a6e05f1" (UID: "312277e7-4018-4a0f-8374-c7f22a6e05f1"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.998307 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-scripts" (OuterVolumeSpecName: "scripts") pod "312277e7-4018-4a0f-8374-c7f22a6e05f1" (UID: "312277e7-4018-4a0f-8374-c7f22a6e05f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:19:13 crc kubenswrapper[4909]: I1126 07:19:13.998881 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-scripts" (OuterVolumeSpecName: "scripts") pod "a920e179-3a15-4052-9f09-8e9264110499" (UID: "a920e179-3a15-4052-9f09-8e9264110499"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.000419 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a920e179-3a15-4052-9f09-8e9264110499-kube-api-access-2svw8" (OuterVolumeSpecName: "kube-api-access-2svw8") pod "a920e179-3a15-4052-9f09-8e9264110499" (UID: "a920e179-3a15-4052-9f09-8e9264110499"). InnerVolumeSpecName "kube-api-access-2svw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.002380 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "a920e179-3a15-4052-9f09-8e9264110499" (UID: "a920e179-3a15-4052-9f09-8e9264110499"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.014848 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312277e7-4018-4a0f-8374-c7f22a6e05f1-kube-api-access-k4wpb" (OuterVolumeSpecName: "kube-api-access-k4wpb") pod "312277e7-4018-4a0f-8374-c7f22a6e05f1" (UID: "312277e7-4018-4a0f-8374-c7f22a6e05f1"). InnerVolumeSpecName "kube-api-access-k4wpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.027168 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a920e179-3a15-4052-9f09-8e9264110499" (UID: "a920e179-3a15-4052-9f09-8e9264110499"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.028941 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "312277e7-4018-4a0f-8374-c7f22a6e05f1" (UID: "312277e7-4018-4a0f-8374-c7f22a6e05f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.045801 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-config-data" (OuterVolumeSpecName: "config-data") pod "312277e7-4018-4a0f-8374-c7f22a6e05f1" (UID: "312277e7-4018-4a0f-8374-c7f22a6e05f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.072114 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-config-data" (OuterVolumeSpecName: "config-data") pod "a920e179-3a15-4052-9f09-8e9264110499" (UID: "a920e179-3a15-4052-9f09-8e9264110499"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092794 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092845 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092859 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092872 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2svw8\" (UniqueName: \"kubernetes.io/projected/a920e179-3a15-4052-9f09-8e9264110499-kube-api-access-2svw8\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092887 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092897 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092908 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a920e179-3a15-4052-9f09-8e9264110499-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092919 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/312277e7-4018-4a0f-8374-c7f22a6e05f1-logs\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092930 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092940 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092959 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092972 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4wpb\" (UniqueName: \"kubernetes.io/projected/312277e7-4018-4a0f-8374-c7f22a6e05f1-kube-api-access-k4wpb\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092983 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a920e179-3a15-4052-9f09-8e9264110499-logs\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.092994 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312277e7-4018-4a0f-8374-c7f22a6e05f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.114281 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.125310 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.194570 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.194614 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.399303 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gdl8k"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.406585 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.497968 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-svc\") pod \"fde2ec01-3a8c-4264-b307-7d7ac3682499\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") "
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.498043 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-nb\") pod \"fde2ec01-3a8c-4264-b307-7d7ac3682499\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") "
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.498143 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsgjz\" (UniqueName: \"kubernetes.io/projected/fde2ec01-3a8c-4264-b307-7d7ac3682499-kube-api-access-fsgjz\") pod \"fde2ec01-3a8c-4264-b307-7d7ac3682499\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") "
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.498240 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s85kw\" (UniqueName: \"kubernetes.io/projected/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-kube-api-access-s85kw\") pod \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") "
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.498275 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-config\") pod \"fde2ec01-3a8c-4264-b307-7d7ac3682499\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") "
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.498319 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-combined-ca-bundle\") pod \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") "
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.498372 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-sb\") pod \"fde2ec01-3a8c-4264-b307-7d7ac3682499\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") "
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.498401 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-swift-storage-0\") pod \"fde2ec01-3a8c-4264-b307-7d7ac3682499\" (UID: \"fde2ec01-3a8c-4264-b307-7d7ac3682499\") "
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.498422 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-config\") pod \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\" (UID: \"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7\") "
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.502523 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-kube-api-access-s85kw" (OuterVolumeSpecName: "kube-api-access-s85kw") pod "5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7" (UID: "5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7"). InnerVolumeSpecName "kube-api-access-s85kw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.504829 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde2ec01-3a8c-4264-b307-7d7ac3682499-kube-api-access-fsgjz" (OuterVolumeSpecName: "kube-api-access-fsgjz") pod "fde2ec01-3a8c-4264-b307-7d7ac3682499" (UID: "fde2ec01-3a8c-4264-b307-7d7ac3682499"). InnerVolumeSpecName "kube-api-access-fsgjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.526882 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7" (UID: "5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.527449 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-config" (OuterVolumeSpecName: "config") pod "5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7" (UID: "5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.545624 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fde2ec01-3a8c-4264-b307-7d7ac3682499" (UID: "fde2ec01-3a8c-4264-b307-7d7ac3682499"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.552816 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fde2ec01-3a8c-4264-b307-7d7ac3682499" (UID: "fde2ec01-3a8c-4264-b307-7d7ac3682499"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.555659 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-config" (OuterVolumeSpecName: "config") pod "fde2ec01-3a8c-4264-b307-7d7ac3682499" (UID: "fde2ec01-3a8c-4264-b307-7d7ac3682499"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.557490 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fde2ec01-3a8c-4264-b307-7d7ac3682499" (UID: "fde2ec01-3a8c-4264-b307-7d7ac3682499"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.575323 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fde2ec01-3a8c-4264-b307-7d7ac3682499" (UID: "fde2ec01-3a8c-4264-b307-7d7ac3682499"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.600541 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsgjz\" (UniqueName: \"kubernetes.io/projected/fde2ec01-3a8c-4264-b307-7d7ac3682499-kube-api-access-fsgjz\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.600582 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s85kw\" (UniqueName: \"kubernetes.io/projected/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-kube-api-access-s85kw\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.600616 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.600631 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.600645 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.600655 4909 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.600667 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.600678 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.600690 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fde2ec01-3a8c-4264-b307-7d7ac3682499-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.620915 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a920e179-3a15-4052-9f09-8e9264110499","Type":"ContainerDied","Data":"c6a296e4fe1a98cb5e34e25847d6ebcda0c0a6b535c91359dd3f3124e0207ab0"}
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.620952 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.620963 4909 scope.go:117] "RemoveContainer" containerID="906c19152db404a652ac530adf1594ba8acb23ff35d0f1fc1212ea8033d5b279"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.624542 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"312277e7-4018-4a0f-8374-c7f22a6e05f1","Type":"ContainerDied","Data":"142ce54854b3361482a2cc2ca5a024e52b0a9f2e4106947431b35158a256b93c"}
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.624751 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.629683 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" event={"ID":"fde2ec01-3a8c-4264-b307-7d7ac3682499","Type":"ContainerDied","Data":"383758dc6cd0be74049c055a459cfdd4286c2749437b5f85e4e54d0b7153cf3b"}
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.629712 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.632368 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gdl8k" event={"ID":"5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7","Type":"ContainerDied","Data":"bd68ba37677621e28343b3121d374cf4ef77fc7fd6e8738545345d5b5489244c"}
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.632402 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gdl8k"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.632409 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd68ba37677621e28343b3121d374cf4ef77fc7fd6e8738545345d5b5489244c"
Nov 26 07:19:14 crc kubenswrapper[4909]: E1126 07:19:14.635328 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-pp2qf" podUID="94fb3d6d-c540-4c6d-af4d-257226561c47"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.648217 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.665184 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.682760 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 26 07:19:14 crc kubenswrapper[4909]: E1126 07:19:14.683203 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312277e7-4018-4a0f-8374-c7f22a6e05f1" containerName="glance-httpd"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683228 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="312277e7-4018-4a0f-8374-c7f22a6e05f1" containerName="glance-httpd"
Nov 26 07:19:14 crc kubenswrapper[4909]: E1126 07:19:14.683247 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a920e179-3a15-4052-9f09-8e9264110499" containerName="glance-log"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683255 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a920e179-3a15-4052-9f09-8e9264110499" containerName="glance-log"
Nov 26 07:19:14 crc kubenswrapper[4909]: E1126 07:19:14.683272 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde2ec01-3a8c-4264-b307-7d7ac3682499" containerName="init"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683280 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde2ec01-3a8c-4264-b307-7d7ac3682499" containerName="init"
Nov 26 07:19:14 crc kubenswrapper[4909]: E1126 07:19:14.683307 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a920e179-3a15-4052-9f09-8e9264110499" containerName="glance-httpd"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683315 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a920e179-3a15-4052-9f09-8e9264110499" containerName="glance-httpd"
Nov 26 07:19:14 crc kubenswrapper[4909]: E1126 07:19:14.683328 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde2ec01-3a8c-4264-b307-7d7ac3682499" containerName="dnsmasq-dns"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683335 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde2ec01-3a8c-4264-b307-7d7ac3682499" containerName="dnsmasq-dns"
Nov 26 07:19:14 crc kubenswrapper[4909]: E1126 07:19:14.683352 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312277e7-4018-4a0f-8374-c7f22a6e05f1" containerName="glance-log"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683359 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="312277e7-4018-4a0f-8374-c7f22a6e05f1" containerName="glance-log"
Nov 26 07:19:14 crc kubenswrapper[4909]: E1126 07:19:14.683372 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7" containerName="neutron-db-sync"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683380 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7" containerName="neutron-db-sync"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683559 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7" containerName="neutron-db-sync"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683577 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="312277e7-4018-4a0f-8374-c7f22a6e05f1" containerName="glance-log"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683609 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde2ec01-3a8c-4264-b307-7d7ac3682499" containerName="dnsmasq-dns"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683625 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a920e179-3a15-4052-9f09-8e9264110499" containerName="glance-log"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683643 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a920e179-3a15-4052-9f09-8e9264110499" containerName="glance-httpd"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.683663 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="312277e7-4018-4a0f-8374-c7f22a6e05f1" containerName="glance-httpd"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.686501 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.694269 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.703778 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mv4fs"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.704246 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.704806 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.705034 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.710293 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.721018 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 26 07:19:14 crc kubenswrapper[4909]: E1126 07:19:14.732266 4909 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod312277e7_4018_4a0f_8374_c7f22a6e05f1.slice/crio-142ce54854b3361482a2cc2ca5a024e52b0a9f2e4106947431b35158a256b93c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod312277e7_4018_4a0f_8374_c7f22a6e05f1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda920e179_3a15_4052_9f09_8e9264110499.slice\": RecentStats: unable to find data in memory cache]"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.732879 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.734920 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.739518 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.742686 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.766865 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.787978 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-zsqsd"]
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.797240 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-zsqsd"]
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.821791 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj692\" (UniqueName: \"kubernetes.io/projected/56b43716-a21b-439a-ab99-835173e5d8bc-kube-api-access-nj692\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.822278 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.822373 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.822411 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.822474 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-logs\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.822531 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.822579 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.822630 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924541 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924613 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924652 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-scripts\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924689 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924712 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924737 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj692\" (UniqueName: \"kubernetes.io/projected/56b43716-a21b-439a-ab99-835173e5d8bc-kube-api-access-nj692\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924778 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlmt7\" (UniqueName: \"kubernetes.io/projected/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-kube-api-access-xlmt7\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924820 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924876 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924908 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924927 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-logs\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924959 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.924989 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-logs\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.925012 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.925040 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-config-data\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.925056 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.925069 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.925900 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.925937 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-logs\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.929470 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.929707 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.931839 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.933684 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.941630 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj692\" (UniqueName: \"kubernetes.io/projected/56b43716-a21b-439a-ab99-835173e5d8bc-kube-api-access-nj692\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:14 crc kubenswrapper[4909]: I1126 07:19:14.958024 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " pod="openstack/glance-default-external-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.029793 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlmt7\" (UniqueName: \"kubernetes.io/projected/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-kube-api-access-xlmt7\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.029912 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.029938 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-logs\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.029978 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.030005 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-config-data\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.030055 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-scripts\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.030089 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.030110 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.030712 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.034710 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.034722 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.035025 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-logs\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.038998 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-config-data\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.039249 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.049948 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-scripts\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.051931 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlmt7\" (UniqueName: \"kubernetes.io/projected/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-kube-api-access-xlmt7\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.056870 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.079507 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.099096 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.118440 4909 scope.go:117] "RemoveContainer" containerID="93474ad5c7d49c7d192de04aca05929f6481ee988f81b8b1d50cd54f7f05d84a"
Nov 26 07:19:15 crc kubenswrapper[4909]: E1126 07:19:15.124383 4909 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest"
Nov 26 07:19:15 crc kubenswrapper[4909]: E1126 07:19:15.126343 4909 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qx88q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e14d933d-2d7e-43cf-a99d-d03035d13522): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 26 07:19:15 crc kubenswrapper[4909]: E1126 07:19:15.128913 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.156008 4909 scope.go:117] "RemoveContainer" containerID="3d3e1a9e82b8e1afe632eff02f50ae8a31ac118df04f1881c291edd65a09e51f"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.178862 4909 scope.go:117] "RemoveContainer" containerID="8b7232feea8f70e385c3485ca1607c7a2b6ba407f57eec36464284590c50f91a"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.235633 4909 scope.go:117] "RemoveContainer" containerID="eb4fe9e36f8c23ce2c0b49c604d2e948bcfd7a88308bfd14ad79330f54d340af"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.300460 4909 scope.go:117] "RemoveContainer" containerID="5e4852f7d478111fe1c268c57eec9fda98fe1fdbf18d00cc9feb59a94e1bec07"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.642201 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-tmqwf"]
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.646620 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.669836 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-tmqwf"]
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.678751 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kfqlk" event={"ID":"a4bc32bf-2659-4f99-bb10-8ac0617b317c","Type":"ContainerStarted","Data":"b16d29712c871d2500018cadeedd75995eef5d750f3d147370f67dfe52f6384f"}
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.685764 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="ceilometer-central-agent" containerID="cri-o://4a11fc09105962dfc587407b829c9ca5151ed17190d6a3a5284c96ef34e8d3fe" gracePeriod=30
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.686808 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="sg-core" containerID="cri-o://d4f59cbade1571411da46a6ceab0e818bcec0284f7e08b87511ab66230981717" gracePeriod=30
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.686853 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="ceilometer-notification-agent" containerID="cri-o://df939f2bae7228f4faafee702ecf65f5a69ff7d889fcea6b2068289dd6b3f8fe" gracePeriod=30
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.701061 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kfqlk" podStartSLOduration=2.7777154509999997 podStartE2EDuration="25.701042624s" podCreationTimestamp="2025-11-26 07:18:50 +0000 UTC" firstStartedPulling="2025-11-26 07:18:52.178901112 +0000 UTC m=+1104.325112278" lastFinishedPulling="2025-11-26 07:19:15.102228285 +0000 UTC m=+1127.248439451" observedRunningTime="2025-11-26 07:19:15.700582102 +0000 UTC m=+1127.846793258" watchObservedRunningTime="2025-11-26 07:19:15.701042624 +0000 UTC m=+1127.847253790"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.746465 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.746505 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.746545 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.746565 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-config\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.746581 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.746624 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87h7q\" (UniqueName: \"kubernetes.io/projected/8795e117-1d7a-44d9-bd86-93fca918ec0e-kube-api-access-87h7q\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.764880 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d655886f6-h56wz"]
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.766429 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d655886f6-h56wz"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.783046 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pn9zm"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.783330 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.783545 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.783989 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.809116 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d655886f6-h56wz"]
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.820779 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.847684 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfd2\" (UniqueName: \"kubernetes.io/projected/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-kube-api-access-lwfd2\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.850790 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.851045 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.851146 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-config\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.851277 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.851377 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-ovndb-tls-certs\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.851447 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-config\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.851606 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.851724 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87h7q\" (UniqueName: \"kubernetes.io/projected/8795e117-1d7a-44d9-bd86-93fca918ec0e-kube-api-access-87h7q\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.851878 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-combined-ca-bundle\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.851946 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-httpd-config\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.852330 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.852537 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.851720 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.853673 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-config\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf"
Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.853894 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName:
\"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.854632 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.876712 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87h7q\" (UniqueName: \"kubernetes.io/projected/8795e117-1d7a-44d9-bd86-93fca918ec0e-kube-api-access-87h7q\") pod \"dnsmasq-dns-84b966f6c9-tmqwf\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.953762 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfd2\" (UniqueName: \"kubernetes.io/projected/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-kube-api-access-lwfd2\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.953831 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-config\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.953870 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-ovndb-tls-certs\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.953926 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-combined-ca-bundle\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.953942 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-httpd-config\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.958360 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-httpd-config\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.959135 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-combined-ca-bundle\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.959403 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-config\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.960458 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-ovndb-tls-certs\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.968984 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwfd2\" (UniqueName: \"kubernetes.io/projected/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-kube-api-access-lwfd2\") pod \"neutron-5d655886f6-h56wz\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:15 crc kubenswrapper[4909]: I1126 07:19:15.976974 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.146638 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.485841 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-tmqwf"] Nov 26 07:19:16 crc kubenswrapper[4909]: W1126 07:19:16.499702 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8795e117_1d7a_44d9_bd86_93fca918ec0e.slice/crio-c48d1731780fa153e7586fd972b8601ba5a571bdcbfe0d62bcf3dbde103f8ae0 WatchSource:0}: Error finding container c48d1731780fa153e7586fd972b8601ba5a571bdcbfe0d62bcf3dbde103f8ae0: Status 404 returned error can't find the container with id c48d1731780fa153e7586fd972b8601ba5a571bdcbfe0d62bcf3dbde103f8ae0 Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.515744 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312277e7-4018-4a0f-8374-c7f22a6e05f1" path="/var/lib/kubelet/pods/312277e7-4018-4a0f-8374-c7f22a6e05f1/volumes" Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.520544 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a920e179-3a15-4052-9f09-8e9264110499" path="/var/lib/kubelet/pods/a920e179-3a15-4052-9f09-8e9264110499/volumes" Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.521241 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde2ec01-3a8c-4264-b307-7d7ac3682499" path="/var/lib/kubelet/pods/fde2ec01-3a8c-4264-b307-7d7ac3682499/volumes" Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.582350 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.763487 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14","Type":"ContainerStarted","Data":"15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9"} Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.763761 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14","Type":"ContainerStarted","Data":"973e7273041589fa4f53d226643d93658fe0bcd3729c3228443fdcc42e8192d3"} Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.794826 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56b43716-a21b-439a-ab99-835173e5d8bc","Type":"ContainerStarted","Data":"7b5ccfe9ae36637c50a8d35cef08c83468a7c9c281bc26f62a66a85db369bc90"} Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.794877 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56b43716-a21b-439a-ab99-835173e5d8bc","Type":"ContainerStarted","Data":"3e60e06149539ab7039cac6ee8dde8b70c1ef81090ac3e0a9b1c3738326d395f"} Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.797713 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d655886f6-h56wz"] Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.812262 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" event={"ID":"8795e117-1d7a-44d9-bd86-93fca918ec0e","Type":"ContainerStarted","Data":"c48d1731780fa153e7586fd972b8601ba5a571bdcbfe0d62bcf3dbde103f8ae0"} Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.866048 4909 generic.go:334] "Generic (PLEG): container finished" podID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerID="d4f59cbade1571411da46a6ceab0e818bcec0284f7e08b87511ab66230981717" exitCode=2 Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.866086 4909 generic.go:334] "Generic (PLEG): container finished" podID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerID="4a11fc09105962dfc587407b829c9ca5151ed17190d6a3a5284c96ef34e8d3fe" exitCode=0 Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.866889 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14d933d-2d7e-43cf-a99d-d03035d13522","Type":"ContainerDied","Data":"d4f59cbade1571411da46a6ceab0e818bcec0284f7e08b87511ab66230981717"} Nov 26 07:19:16 crc kubenswrapper[4909]: I1126 07:19:16.866926 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14d933d-2d7e-43cf-a99d-d03035d13522","Type":"ContainerDied","Data":"4a11fc09105962dfc587407b829c9ca5151ed17190d6a3a5284c96ef34e8d3fe"} Nov 26 07:19:16 crc kubenswrapper[4909]: W1126 07:19:16.879949 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7aa7dca9_3bc0_4869_b69a_f2bbf2190038.slice/crio-827770e82efc17369e95d0b37fe4a53335d9a70b269490faae753f15d2fdf8ba WatchSource:0}: Error finding container 827770e82efc17369e95d0b37fe4a53335d9a70b269490faae753f15d2fdf8ba: Status 404 returned error can't find the container with id 827770e82efc17369e95d0b37fe4a53335d9a70b269490faae753f15d2fdf8ba Nov 26 07:19:17 crc kubenswrapper[4909]: I1126 07:19:17.877119 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14","Type":"ContainerStarted","Data":"9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220"} Nov 26 07:19:17 crc kubenswrapper[4909]: I1126 07:19:17.879980 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56b43716-a21b-439a-ab99-835173e5d8bc","Type":"ContainerStarted","Data":"a169304f4fdb7a1da8086638385028b6d2efac6ea2de938d901c37a0fadfd111"} Nov 26 07:19:17 
crc kubenswrapper[4909]: I1126 07:19:17.882321 4909 generic.go:334] "Generic (PLEG): container finished" podID="8795e117-1d7a-44d9-bd86-93fca918ec0e" containerID="bd173df9de33bb1cf750190e87e39061b115d7f874e86cd0737bbd83152f4d49" exitCode=0 Nov 26 07:19:17 crc kubenswrapper[4909]: I1126 07:19:17.882401 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" event={"ID":"8795e117-1d7a-44d9-bd86-93fca918ec0e","Type":"ContainerDied","Data":"bd173df9de33bb1cf750190e87e39061b115d7f874e86cd0737bbd83152f4d49"} Nov 26 07:19:17 crc kubenswrapper[4909]: I1126 07:19:17.884456 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d655886f6-h56wz" event={"ID":"7aa7dca9-3bc0-4869-b69a-f2bbf2190038","Type":"ContainerStarted","Data":"6ce5bc27dbcd8bc437bbc74ad6462b2ac8d4570a131a3d43b4e39c235a6f2b13"} Nov 26 07:19:17 crc kubenswrapper[4909]: I1126 07:19:17.884512 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d655886f6-h56wz" event={"ID":"7aa7dca9-3bc0-4869-b69a-f2bbf2190038","Type":"ContainerStarted","Data":"f830e018627073977f605e520fbf64ada9095f6bb653e33ef0ca390f3eb5fabe"} Nov 26 07:19:17 crc kubenswrapper[4909]: I1126 07:19:17.884533 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d655886f6-h56wz" event={"ID":"7aa7dca9-3bc0-4869-b69a-f2bbf2190038","Type":"ContainerStarted","Data":"827770e82efc17369e95d0b37fe4a53335d9a70b269490faae753f15d2fdf8ba"} Nov 26 07:19:17 crc kubenswrapper[4909]: I1126 07:19:17.885337 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:17 crc kubenswrapper[4909]: I1126 07:19:17.905099 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.905075279 podStartE2EDuration="3.905075279s" podCreationTimestamp="2025-11-26 07:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:17.897672226 +0000 UTC m=+1130.043883392" watchObservedRunningTime="2025-11-26 07:19:17.905075279 +0000 UTC m=+1130.051286445" Nov 26 07:19:17 crc kubenswrapper[4909]: I1126 07:19:17.972652 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5d655886f6-h56wz" podStartSLOduration=2.972628134 podStartE2EDuration="2.972628134s" podCreationTimestamp="2025-11-26 07:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:17.949110441 +0000 UTC m=+1130.095321607" watchObservedRunningTime="2025-11-26 07:19:17.972628134 +0000 UTC m=+1130.118839300" Nov 26 07:19:17 crc kubenswrapper[4909]: I1126 07:19:17.981462 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.9814418050000002 podStartE2EDuration="3.981441805s" podCreationTimestamp="2025-11-26 07:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:17.970480405 +0000 UTC m=+1130.116691591" watchObservedRunningTime="2025-11-26 07:19:17.981441805 +0000 UTC m=+1130.127652971" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.422574 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74f9bb65df-qpbtq"] Nov 26 07:19:18 crc 
kubenswrapper[4909]: I1126 07:19:18.424374 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.428418 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.428673 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.438345 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74f9bb65df-qpbtq"] Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.506336 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-internal-tls-certs\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.506464 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-combined-ca-bundle\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.506490 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpg4x\" (UniqueName: \"kubernetes.io/projected/978782ca-c440-4bb1-9516-30115aa4a0b2-kube-api-access-qpg4x\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.506521 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-config\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.506537 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-httpd-config\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.506603 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-ovndb-tls-certs\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.506803 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-public-tls-certs\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.608416 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-public-tls-certs\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.608668 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-internal-tls-certs\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.608827 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-combined-ca-bundle\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.609306 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpg4x\" (UniqueName: \"kubernetes.io/projected/978782ca-c440-4bb1-9516-30115aa4a0b2-kube-api-access-qpg4x\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.609697 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-config\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.610058 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-httpd-config\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.610431 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-ovndb-tls-certs\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.613209 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-combined-ca-bundle\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.613998 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-internal-tls-certs\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.614504 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-config\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.614703 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-ovndb-tls-certs\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.625309 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-httpd-config\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.625356 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-public-tls-certs\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.631906 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpg4x\" (UniqueName: \"kubernetes.io/projected/978782ca-c440-4bb1-9516-30115aa4a0b2-kube-api-access-qpg4x\") pod \"neutron-74f9bb65df-qpbtq\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.755670 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.858299 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-zsqsd" podUID="fde2ec01-3a8c-4264-b307-7d7ac3682499" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: i/o timeout" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.895448 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" event={"ID":"8795e117-1d7a-44d9-bd86-93fca918ec0e","Type":"ContainerStarted","Data":"c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963"} Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.895611 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.897304 4909 generic.go:334] "Generic (PLEG): container finished" podID="a4bc32bf-2659-4f99-bb10-8ac0617b317c" containerID="b16d29712c871d2500018cadeedd75995eef5d750f3d147370f67dfe52f6384f" exitCode=0 Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.897499 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kfqlk" event={"ID":"a4bc32bf-2659-4f99-bb10-8ac0617b317c","Type":"ContainerDied","Data":"b16d29712c871d2500018cadeedd75995eef5d750f3d147370f67dfe52f6384f"} Nov 26 07:19:18 crc kubenswrapper[4909]: I1126 07:19:18.919019 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" podStartSLOduration=3.918997959 podStartE2EDuration="3.918997959s" podCreationTimestamp="2025-11-26 07:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:18.910801515 +0000 UTC m=+1131.057012701" watchObservedRunningTime="2025-11-26 07:19:18.918997959 +0000 UTC m=+1131.065209135" Nov 26 07:19:19 crc kubenswrapper[4909]: I1126 07:19:19.317418 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74f9bb65df-qpbtq"] Nov 26 07:19:19 crc kubenswrapper[4909]: W1126 07:19:19.321365 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod978782ca_c440_4bb1_9516_30115aa4a0b2.slice/crio-3a6a15c713f0cc34415fc9c44e4afdcfdc34a64cb3eef097b53ad3abb65f18e1 WatchSource:0}: Error finding container 3a6a15c713f0cc34415fc9c44e4afdcfdc34a64cb3eef097b53ad3abb65f18e1: Status 404 returned error can't find the container with id 3a6a15c713f0cc34415fc9c44e4afdcfdc34a64cb3eef097b53ad3abb65f18e1 Nov 26 07:19:19 crc kubenswrapper[4909]: I1126 07:19:19.907339 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74f9bb65df-qpbtq" event={"ID":"978782ca-c440-4bb1-9516-30115aa4a0b2","Type":"ContainerStarted","Data":"76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876"} Nov 26 07:19:19 crc kubenswrapper[4909]: I1126 07:19:19.907644 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74f9bb65df-qpbtq" event={"ID":"978782ca-c440-4bb1-9516-30115aa4a0b2","Type":"ContainerStarted","Data":"0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa"} Nov 26 07:19:19 crc kubenswrapper[4909]: I1126 07:19:19.907656 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74f9bb65df-qpbtq" 
event={"ID":"978782ca-c440-4bb1-9516-30115aa4a0b2","Type":"ContainerStarted","Data":"3a6a15c713f0cc34415fc9c44e4afdcfdc34a64cb3eef097b53ad3abb65f18e1"} Nov 26 07:19:19 crc kubenswrapper[4909]: I1126 07:19:19.927734 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74f9bb65df-qpbtq" podStartSLOduration=1.9277084169999998 podStartE2EDuration="1.927708417s" podCreationTimestamp="2025-11-26 07:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:19.925306841 +0000 UTC m=+1132.071518017" watchObservedRunningTime="2025-11-26 07:19:19.927708417 +0000 UTC m=+1132.073919583" Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.294816 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.341847 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmv2c\" (UniqueName: \"kubernetes.io/projected/a4bc32bf-2659-4f99-bb10-8ac0617b317c-kube-api-access-qmv2c\") pod \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.341975 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-combined-ca-bundle\") pod \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.342083 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-db-sync-config-data\") pod \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\" (UID: \"a4bc32bf-2659-4f99-bb10-8ac0617b317c\") " Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.354859 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a4bc32bf-2659-4f99-bb10-8ac0617b317c" (UID: "a4bc32bf-2659-4f99-bb10-8ac0617b317c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.354899 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4bc32bf-2659-4f99-bb10-8ac0617b317c-kube-api-access-qmv2c" (OuterVolumeSpecName: "kube-api-access-qmv2c") pod "a4bc32bf-2659-4f99-bb10-8ac0617b317c" (UID: "a4bc32bf-2659-4f99-bb10-8ac0617b317c"). InnerVolumeSpecName "kube-api-access-qmv2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.369665 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4bc32bf-2659-4f99-bb10-8ac0617b317c" (UID: "a4bc32bf-2659-4f99-bb10-8ac0617b317c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.444071 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.444101 4909 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4bc32bf-2659-4f99-bb10-8ac0617b317c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.444135 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmv2c\" (UniqueName: \"kubernetes.io/projected/a4bc32bf-2659-4f99-bb10-8ac0617b317c-kube-api-access-qmv2c\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.922614 4909 generic.go:334] "Generic (PLEG): container finished" podID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerID="df939f2bae7228f4faafee702ecf65f5a69ff7d889fcea6b2068289dd6b3f8fe" exitCode=0 Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.922708 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14d933d-2d7e-43cf-a99d-d03035d13522","Type":"ContainerDied","Data":"df939f2bae7228f4faafee702ecf65f5a69ff7d889fcea6b2068289dd6b3f8fe"} Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.924398 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kfqlk" event={"ID":"a4bc32bf-2659-4f99-bb10-8ac0617b317c","Type":"ContainerDied","Data":"d790075b4931de205dcac4b3be146ff496cb6953a241dc4179cbc41f0a95a59b"} Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.924438 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d790075b4931de205dcac4b3be146ff496cb6953a241dc4179cbc41f0a95a59b" Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.924441 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kfqlk" Nov 26 07:19:20 crc kubenswrapper[4909]: I1126 07:19:20.924558 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.231352 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6489c4db99-sc69l"] Nov 26 07:19:21 crc kubenswrapper[4909]: E1126 07:19:21.231789 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4bc32bf-2659-4f99-bb10-8ac0617b317c" containerName="barbican-db-sync" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.231800 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4bc32bf-2659-4f99-bb10-8ac0617b317c" containerName="barbican-db-sync" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.231976 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4bc32bf-2659-4f99-bb10-8ac0617b317c" containerName="barbican-db-sync" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.232895 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.240229 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6489c4db99-sc69l"] Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.240895 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.241572 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9b2v6" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.241802 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.299065 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-85d774bbbb-slpbz"] Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.315683 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.323269 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-85d774bbbb-slpbz"] Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.323848 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.350391 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-tmqwf"] Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.350622 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" podUID="8795e117-1d7a-44d9-bd86-93fca918ec0e" containerName="dnsmasq-dns" containerID="cri-o://c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963" gracePeriod=10 Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.378917 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv7ck\" (UniqueName: \"kubernetes.io/projected/7382debb-3dc4-4849-9109-5d415c6a196f-kube-api-access-wv7ck\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.378988 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d79c0347-3494-4451-83b3-9919dd346f19-logs\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.379011 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.379027 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data-custom\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.379051 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-combined-ca-bundle\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.379084 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7382debb-3dc4-4849-9109-5d415c6a196f-logs\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.379122 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-combined-ca-bundle\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.379140 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qbb\" (UniqueName: \"kubernetes.io/projected/d79c0347-3494-4451-83b3-9919dd346f19-kube-api-access-l9qbb\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.379166 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.379203 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data-custom\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.429045 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-wth9q"] Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.430687 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.433557 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.445390 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-wth9q"] Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.481228 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-combined-ca-bundle\") pod \"e14d933d-2d7e-43cf-a99d-d03035d13522\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.481319 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-scripts\") pod \"e14d933d-2d7e-43cf-a99d-d03035d13522\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.481558 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx88q\" (UniqueName: \"kubernetes.io/projected/e14d933d-2d7e-43cf-a99d-d03035d13522-kube-api-access-qx88q\") pod \"e14d933d-2d7e-43cf-a99d-d03035d13522\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.481618 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-config-data\") pod \"e14d933d-2d7e-43cf-a99d-d03035d13522\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.481674 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-run-httpd\") pod \"e14d933d-2d7e-43cf-a99d-d03035d13522\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.481711 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-sg-core-conf-yaml\") pod \"e14d933d-2d7e-43cf-a99d-d03035d13522\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.481760 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-log-httpd\") pod \"e14d933d-2d7e-43cf-a99d-d03035d13522\" (UID: \"e14d933d-2d7e-43cf-a99d-d03035d13522\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482241 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-combined-ca-bundle\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482269 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9qbb\" (UniqueName: \"kubernetes.io/projected/d79c0347-3494-4451-83b3-9919dd346f19-kube-api-access-l9qbb\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482298 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482350 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482400 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482429 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data-custom\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482455 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv7ck\" (UniqueName: \"kubernetes.io/projected/7382debb-3dc4-4849-9109-5d415c6a196f-kube-api-access-wv7ck\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482501 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482530 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k6hr\" (UniqueName: \"kubernetes.io/projected/3ecb4448-9622-4e65-bcf6-85ed4c003817-kube-api-access-7k6hr\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482583 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d79c0347-3494-4451-83b3-9919dd346f19-logs\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482627 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " 
pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482645 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data-custom\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482708 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-combined-ca-bundle\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482746 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482785 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-config\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.482810 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7382debb-3dc4-4849-9109-5d415c6a196f-logs\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.483730 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7382debb-3dc4-4849-9109-5d415c6a196f-logs\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.487667 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d79c0347-3494-4451-83b3-9919dd346f19-logs\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.495938 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-scripts" (OuterVolumeSpecName: "scripts") pod "e14d933d-2d7e-43cf-a99d-d03035d13522" (UID: "e14d933d-2d7e-43cf-a99d-d03035d13522"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.507751 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data-custom\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.507973 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e14d933d-2d7e-43cf-a99d-d03035d13522" (UID: "e14d933d-2d7e-43cf-a99d-d03035d13522"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.508149 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e14d933d-2d7e-43cf-a99d-d03035d13522" (UID: "e14d933d-2d7e-43cf-a99d-d03035d13522"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.509919 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14d933d-2d7e-43cf-a99d-d03035d13522-kube-api-access-qx88q" (OuterVolumeSpecName: "kube-api-access-qx88q") pod "e14d933d-2d7e-43cf-a99d-d03035d13522" (UID: "e14d933d-2d7e-43cf-a99d-d03035d13522"). InnerVolumeSpecName "kube-api-access-qx88q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.520664 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-combined-ca-bundle\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.524881 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.527374 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.528192 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-combined-ca-bundle\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.528692 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data-custom\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.529253 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9qbb\" (UniqueName: \"kubernetes.io/projected/d79c0347-3494-4451-83b3-9919dd346f19-kube-api-access-l9qbb\") pod \"barbican-worker-6489c4db99-sc69l\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.530739 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b88557586-pqm94"] Nov 26 07:19:21 crc kubenswrapper[4909]: E1126 07:19:21.531133 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="ceilometer-central-agent" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.531146 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="ceilometer-central-agent" Nov 26 07:19:21 crc kubenswrapper[4909]: E1126 07:19:21.531161 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="sg-core" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.531167 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="sg-core" Nov 26 07:19:21 crc kubenswrapper[4909]: E1126 07:19:21.531185 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="ceilometer-notification-agent" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.531192 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="ceilometer-notification-agent" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.531342 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="sg-core" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.531366 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="ceilometer-central-agent" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.531429 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" containerName="ceilometer-notification-agent" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.532932 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.535177 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.536915 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv7ck\" (UniqueName: \"kubernetes.io/projected/7382debb-3dc4-4849-9109-5d415c6a196f-kube-api-access-wv7ck\") pod \"barbican-keystone-listener-85d774bbbb-slpbz\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.551614 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b88557586-pqm94"] Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.567281 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e14d933d-2d7e-43cf-a99d-d03035d13522" (UID: "e14d933d-2d7e-43cf-a99d-d03035d13522"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.573387 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.585846 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data-custom\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.585902 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.585927 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-combined-ca-bundle\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.585949 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-config\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.585974 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2rsl\" (UniqueName: \"kubernetes.io/projected/c7e9a004-6dc5-44f0-9429-cca23187791b-kube-api-access-w2rsl\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.586021 
4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e9a004-6dc5-44f0-9429-cca23187791b-logs\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.586079 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.586118 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.586162 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.586209 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k6hr\" (UniqueName: \"kubernetes.io/projected/3ecb4448-9622-4e65-bcf6-85ed4c003817-kube-api-access-7k6hr\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.586258 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.586335 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.586350 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx88q\" (UniqueName: \"kubernetes.io/projected/e14d933d-2d7e-43cf-a99d-d03035d13522-kube-api-access-qx88q\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.586365 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.586376 4909 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.586387 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e14d933d-2d7e-43cf-a99d-d03035d13522-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.590632 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.590578 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.590577 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.595838 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-config\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.596221 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.608885 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e14d933d-2d7e-43cf-a99d-d03035d13522" (UID: "e14d933d-2d7e-43cf-a99d-d03035d13522"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.609638 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k6hr\" (UniqueName: \"kubernetes.io/projected/3ecb4448-9622-4e65-bcf6-85ed4c003817-kube-api-access-7k6hr\") pod \"dnsmasq-dns-75c8ddd69c-wth9q\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.613257 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-config-data" (OuterVolumeSpecName: "config-data") pod "e14d933d-2d7e-43cf-a99d-d03035d13522" (UID: "e14d933d-2d7e-43cf-a99d-d03035d13522"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.638428 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.690941 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data-custom\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.691242 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-combined-ca-bundle\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.691289 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2rsl\" (UniqueName: \"kubernetes.io/projected/c7e9a004-6dc5-44f0-9429-cca23187791b-kube-api-access-w2rsl\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.691338 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e9a004-6dc5-44f0-9429-cca23187791b-logs\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.691497 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.691581 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.691613 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14d933d-2d7e-43cf-a99d-d03035d13522-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.692323 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e9a004-6dc5-44f0-9429-cca23187791b-logs\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.695098 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data-custom\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.697168 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-combined-ca-bundle\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.701999 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.713149 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2rsl\" (UniqueName: \"kubernetes.io/projected/c7e9a004-6dc5-44f0-9429-cca23187791b-kube-api-access-w2rsl\") pod \"barbican-api-7b88557586-pqm94\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.752248 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.870168 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.896468 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-config\") pod \"8795e117-1d7a-44d9-bd86-93fca918ec0e\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.896753 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-swift-storage-0\") pod \"8795e117-1d7a-44d9-bd86-93fca918ec0e\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.896844 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-svc\") pod \"8795e117-1d7a-44d9-bd86-93fca918ec0e\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.896882 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-nb\") pod \"8795e117-1d7a-44d9-bd86-93fca918ec0e\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.896940 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-sb\") pod \"8795e117-1d7a-44d9-bd86-93fca918ec0e\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.896999 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87h7q\" (UniqueName: \"kubernetes.io/projected/8795e117-1d7a-44d9-bd86-93fca918ec0e-kube-api-access-87h7q\") pod \"8795e117-1d7a-44d9-bd86-93fca918ec0e\" (UID: \"8795e117-1d7a-44d9-bd86-93fca918ec0e\") " Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 
07:19:21.904776 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8795e117-1d7a-44d9-bd86-93fca918ec0e-kube-api-access-87h7q" (OuterVolumeSpecName: "kube-api-access-87h7q") pod "8795e117-1d7a-44d9-bd86-93fca918ec0e" (UID: "8795e117-1d7a-44d9-bd86-93fca918ec0e"). InnerVolumeSpecName "kube-api-access-87h7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.949454 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.954054 4909 generic.go:334] "Generic (PLEG): container finished" podID="8795e117-1d7a-44d9-bd86-93fca918ec0e" containerID="c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963" exitCode=0 Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.954086 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.954164 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" event={"ID":"8795e117-1d7a-44d9-bd86-93fca918ec0e","Type":"ContainerDied","Data":"c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963"} Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.954192 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-tmqwf" event={"ID":"8795e117-1d7a-44d9-bd86-93fca918ec0e","Type":"ContainerDied","Data":"c48d1731780fa153e7586fd972b8601ba5a571bdcbfe0d62bcf3dbde103f8ae0"} Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.954212 4909 scope.go:117] "RemoveContainer" containerID="c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.968905 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.971885 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14d933d-2d7e-43cf-a99d-d03035d13522","Type":"ContainerDied","Data":"542fc56bd9296847cf0b6bdfbb5570ba82aba0b47f58d9df198ac1d096889016"} Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.974397 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8795e117-1d7a-44d9-bd86-93fca918ec0e" (UID: "8795e117-1d7a-44d9-bd86-93fca918ec0e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.993787 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8795e117-1d7a-44d9-bd86-93fca918ec0e" (UID: "8795e117-1d7a-44d9-bd86-93fca918ec0e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.996041 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-config" (OuterVolumeSpecName: "config") pod "8795e117-1d7a-44d9-bd86-93fca918ec0e" (UID: "8795e117-1d7a-44d9-bd86-93fca918ec0e"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.998267 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.998295 4909 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.998309 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:21 crc kubenswrapper[4909]: I1126 07:19:21.998322 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87h7q\" (UniqueName: \"kubernetes.io/projected/8795e117-1d7a-44d9-bd86-93fca918ec0e-kube-api-access-87h7q\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.041279 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8795e117-1d7a-44d9-bd86-93fca918ec0e" (UID: "8795e117-1d7a-44d9-bd86-93fca918ec0e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.041364 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8795e117-1d7a-44d9-bd86-93fca918ec0e" (UID: "8795e117-1d7a-44d9-bd86-93fca918ec0e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.101745 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.102048 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8795e117-1d7a-44d9-bd86-93fca918ec0e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.132357 4909 scope.go:117] "RemoveContainer" containerID="bd173df9de33bb1cf750190e87e39061b115d7f874e86cd0737bbd83152f4d49" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.136497 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6489c4db99-sc69l"] Nov 26 07:19:22 crc kubenswrapper[4909]: W1126 07:19:22.185395 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd79c0347_3494_4451_83b3_9919dd346f19.slice/crio-bb68cdcebf6c3e373dd1c0efde12a52465d23642dcdbe936042b900c1232fa2c WatchSource:0}: Error finding container bb68cdcebf6c3e373dd1c0efde12a52465d23642dcdbe936042b900c1232fa2c: Status 404 returned error can't find the container with id bb68cdcebf6c3e373dd1c0efde12a52465d23642dcdbe936042b900c1232fa2c Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.185671 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.198488 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.204513 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:22 crc kubenswrapper[4909]: E1126 07:19:22.204970 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8795e117-1d7a-44d9-bd86-93fca918ec0e" containerName="dnsmasq-dns" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.204984 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8795e117-1d7a-44d9-bd86-93fca918ec0e" containerName="dnsmasq-dns" Nov 26 07:19:22 crc kubenswrapper[4909]: E1126 07:19:22.205005 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8795e117-1d7a-44d9-bd86-93fca918ec0e" containerName="init" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.205012 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8795e117-1d7a-44d9-bd86-93fca918ec0e" containerName="init" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.205182 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="8795e117-1d7a-44d9-bd86-93fca918ec0e" containerName="dnsmasq-dns" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.206848 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.209826 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.212442 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.214893 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.227897 4909 scope.go:117] "RemoveContainer" containerID="c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963" Nov 26 07:19:22 crc kubenswrapper[4909]: E1126 07:19:22.228429 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963\": container with ID starting with c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963 not found: ID does not exist" containerID="c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.228467 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963"} err="failed to get container status \"c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963\": rpc error: code = NotFound desc = could not find container \"c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963\": container with ID starting with c05ad73704665e7ac439350b9f03f0467d5b0463fa08f56dfa680c443e4d5963 not found: ID does not exist" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.228494 4909 scope.go:117] "RemoveContainer" containerID="bd173df9de33bb1cf750190e87e39061b115d7f874e86cd0737bbd83152f4d49" Nov 26 07:19:22 crc kubenswrapper[4909]: E1126 07:19:22.233432 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd173df9de33bb1cf750190e87e39061b115d7f874e86cd0737bbd83152f4d49\": container with ID starting with bd173df9de33bb1cf750190e87e39061b115d7f874e86cd0737bbd83152f4d49 not found: ID does not exist" containerID="bd173df9de33bb1cf750190e87e39061b115d7f874e86cd0737bbd83152f4d49" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.233480 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd173df9de33bb1cf750190e87e39061b115d7f874e86cd0737bbd83152f4d49"} err="failed to get container status \"bd173df9de33bb1cf750190e87e39061b115d7f874e86cd0737bbd83152f4d49\": rpc error: code = NotFound desc = could not find container \"bd173df9de33bb1cf750190e87e39061b115d7f874e86cd0737bbd83152f4d49\": container with ID starting with bd173df9de33bb1cf750190e87e39061b115d7f874e86cd0737bbd83152f4d49 not found: ID does not exist" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.233511 4909 scope.go:117] "RemoveContainer" containerID="d4f59cbade1571411da46a6ceab0e818bcec0284f7e08b87511ab66230981717" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.266444 4909 scope.go:117] "RemoveContainer" containerID="df939f2bae7228f4faafee702ecf65f5a69ff7d889fcea6b2068289dd6b3f8fe" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.300151 4909 scope.go:117] "RemoveContainer" containerID="4a11fc09105962dfc587407b829c9ca5151ed17190d6a3a5284c96ef34e8d3fe" Nov 26 
07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.317056 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-wth9q"] Nov 26 07:19:22 crc kubenswrapper[4909]: W1126 07:19:22.329738 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7382debb_3dc4_4849_9109_5d415c6a196f.slice/crio-ad06187f176d483a88b61c58bef46d417ba51abeafb0a81f785e58cf8a7f3e9a WatchSource:0}: Error finding container ad06187f176d483a88b61c58bef46d417ba51abeafb0a81f785e58cf8a7f3e9a: Status 404 returned error can't find the container with id ad06187f176d483a88b61c58bef46d417ba51abeafb0a81f785e58cf8a7f3e9a Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.344880 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b88557586-pqm94"] Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.350493 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-tmqwf"] Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.356583 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-tmqwf"] Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.376174 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-85d774bbbb-slpbz"] Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.408810 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppfd4\" (UniqueName: \"kubernetes.io/projected/484a7b64-42d3-45e0-a88e-17da8418f69e-kube-api-access-ppfd4\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.408891 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-run-httpd\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.408920 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-config-data\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.408973 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.409049 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-scripts\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.409126 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.409174 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-log-httpd\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.510860 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8795e117-1d7a-44d9-bd86-93fca918ec0e" path="/var/lib/kubelet/pods/8795e117-1d7a-44d9-bd86-93fca918ec0e/volumes" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.511394 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-scripts\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.511450 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.511487 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-log-httpd\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.511636 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppfd4\" (UniqueName: \"kubernetes.io/projected/484a7b64-42d3-45e0-a88e-17da8418f69e-kube-api-access-ppfd4\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.511659 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-run-httpd\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.511675 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-config-data\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.511690 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.512085 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14d933d-2d7e-43cf-a99d-d03035d13522" path="/var/lib/kubelet/pods/e14d933d-2d7e-43cf-a99d-d03035d13522/volumes" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.513526 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-run-httpd\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.513743 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-log-httpd\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.517905 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.519642 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-scripts\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.522234 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-config-data\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.525456 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.535100 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppfd4\" (UniqueName: \"kubernetes.io/projected/484a7b64-42d3-45e0-a88e-17da8418f69e-kube-api-access-ppfd4\") pod \"ceilometer-0\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.835607 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:22 crc kubenswrapper[4909]: I1126 07:19:22.988024 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6489c4db99-sc69l" event={"ID":"d79c0347-3494-4451-83b3-9919dd346f19","Type":"ContainerStarted","Data":"bb68cdcebf6c3e373dd1c0efde12a52465d23642dcdbe936042b900c1232fa2c"} Nov 26 07:19:23 crc kubenswrapper[4909]: I1126 07:19:23.002736 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b88557586-pqm94" event={"ID":"c7e9a004-6dc5-44f0-9429-cca23187791b","Type":"ContainerStarted","Data":"c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b"} Nov 26 07:19:23 crc kubenswrapper[4909]: I1126 07:19:23.002792 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b88557586-pqm94" event={"ID":"c7e9a004-6dc5-44f0-9429-cca23187791b","Type":"ContainerStarted","Data":"eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58"} Nov 26 07:19:23 crc kubenswrapper[4909]: I1126 07:19:23.002805 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b88557586-pqm94" event={"ID":"c7e9a004-6dc5-44f0-9429-cca23187791b","Type":"ContainerStarted","Data":"f79563afbec79375eb8747d487492eecafad40a8e13205ac2b896f663e30e13b"} Nov 26 07:19:23 crc kubenswrapper[4909]: I1126 07:19:23.003529 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:23 crc kubenswrapper[4909]: I1126 07:19:23.003570 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:23 crc kubenswrapper[4909]: I1126 07:19:23.011185 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" event={"ID":"7382debb-3dc4-4849-9109-5d415c6a196f","Type":"ContainerStarted","Data":"ad06187f176d483a88b61c58bef46d417ba51abeafb0a81f785e58cf8a7f3e9a"} Nov 26 07:19:23 crc kubenswrapper[4909]: I1126 07:19:23.029041 4909 generic.go:334] "Generic (PLEG): container finished" podID="3ecb4448-9622-4e65-bcf6-85ed4c003817" containerID="96bf4023b231a1737ff7bdbef43367ddd232cbf376ab215cd6a62c332ffc23e4" exitCode=0 Nov 26 07:19:23 crc kubenswrapper[4909]: I1126 07:19:23.029088 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" event={"ID":"3ecb4448-9622-4e65-bcf6-85ed4c003817","Type":"ContainerDied","Data":"96bf4023b231a1737ff7bdbef43367ddd232cbf376ab215cd6a62c332ffc23e4"} Nov 26 07:19:23 crc kubenswrapper[4909]: I1126 07:19:23.029112 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" event={"ID":"3ecb4448-9622-4e65-bcf6-85ed4c003817","Type":"ContainerStarted","Data":"64dd36ae010e23c6fabcc43356b79fee22bc72bdae8fffaa553fe9c28e261492"} Nov 26 07:19:23 crc kubenswrapper[4909]: I1126 07:19:23.037504 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b88557586-pqm94" podStartSLOduration=2.037482276 podStartE2EDuration="2.037482276s" podCreationTimestamp="2025-11-26 07:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:23.02627411 +0000 UTC m=+1135.172485276" watchObservedRunningTime="2025-11-26 07:19:23.037482276 +0000 UTC m=+1135.183693442" Nov 26 07:19:23 crc kubenswrapper[4909]: I1126 07:19:23.579243 4909 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:23 crc kubenswrapper[4909]: W1126 07:19:23.604898 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod484a7b64_42d3_45e0_a88e_17da8418f69e.slice/crio-cac2c37f525f917f28d8fe340d60310c8a69e6bc824fe3a81345457a8e7cf153 WatchSource:0}: Error finding container cac2c37f525f917f28d8fe340d60310c8a69e6bc824fe3a81345457a8e7cf153: Status 404 returned error can't find the container with id cac2c37f525f917f28d8fe340d60310c8a69e6bc824fe3a81345457a8e7cf153 Nov 26 07:19:24 crc kubenswrapper[4909]: I1126 07:19:24.045012 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" event={"ID":"3ecb4448-9622-4e65-bcf6-85ed4c003817","Type":"ContainerStarted","Data":"c5beae108eb0a8ffe39f9b2d9b009462ad169a9ad25ca5f0243d5a0ce2e29ef2"} Nov 26 07:19:24 crc kubenswrapper[4909]: I1126 07:19:24.045335 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:24 crc kubenswrapper[4909]: I1126 07:19:24.048850 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"484a7b64-42d3-45e0-a88e-17da8418f69e","Type":"ContainerStarted","Data":"cac2c37f525f917f28d8fe340d60310c8a69e6bc824fe3a81345457a8e7cf153"} Nov 26 07:19:24 crc kubenswrapper[4909]: I1126 07:19:24.076323 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" podStartSLOduration=3.076300197 podStartE2EDuration="3.076300197s" podCreationTimestamp="2025-11-26 07:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:24.068135444 +0000 UTC m=+1136.214346620" watchObservedRunningTime="2025-11-26 07:19:24.076300197 +0000 UTC m=+1136.222511363" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.020889 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8597f74f8-cp26v"] Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.023367 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.027710 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.034790 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.039134 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8597f74f8-cp26v"] Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.080311 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.081539 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.082030 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75kkn\" (UniqueName: \"kubernetes.io/projected/1746d8cc-9394-471e-a1c3-5471e65dfc73-kube-api-access-75kkn\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.082088 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data-custom\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.082163 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-internal-tls-certs\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.082191 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-public-tls-certs\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.082213 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-combined-ca-bundle\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.082258 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.082284 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1746d8cc-9394-471e-a1c3-5471e65dfc73-logs\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.099790 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.099833 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.126712 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.128975 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.131964 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.138156 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.155576 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.184462 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.184535 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1746d8cc-9394-471e-a1c3-5471e65dfc73-logs\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.184625 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75kkn\" (UniqueName: \"kubernetes.io/projected/1746d8cc-9394-471e-a1c3-5471e65dfc73-kube-api-access-75kkn\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.184653 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data-custom\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.184800 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-internal-tls-certs\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: 
I1126 07:19:25.184851 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-public-tls-certs\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.184884 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-combined-ca-bundle\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.186681 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1746d8cc-9394-471e-a1c3-5471e65dfc73-logs\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.188952 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-combined-ca-bundle\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.189081 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-internal-tls-certs\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.189866 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-public-tls-certs\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.199511 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data-custom\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.200906 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.226155 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75kkn\" (UniqueName: \"kubernetes.io/projected/1746d8cc-9394-471e-a1c3-5471e65dfc73-kube-api-access-75kkn\") pod \"barbican-api-8597f74f8-cp26v\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.350246 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:25 crc kubenswrapper[4909]: I1126 07:19:25.849118 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8597f74f8-cp26v"] Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.069138 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8597f74f8-cp26v" event={"ID":"1746d8cc-9394-471e-a1c3-5471e65dfc73","Type":"ContainerStarted","Data":"b0232145ed4b3712ecaad8243ac7d77b6582f6fbaac7a7c0a418835faaca93d0"} Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.069179 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8597f74f8-cp26v" event={"ID":"1746d8cc-9394-471e-a1c3-5471e65dfc73","Type":"ContainerStarted","Data":"0c4c7977ab4eeaa306f1e287c8fad2c1f497474a57492a16ae989c545aff5a22"} Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.073149 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"484a7b64-42d3-45e0-a88e-17da8418f69e","Type":"ContainerStarted","Data":"6f3aa67aca68458a7a4a042f1beb28fb1ef9115c042193e380388e91aef5f52b"} Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.073206 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"484a7b64-42d3-45e0-a88e-17da8418f69e","Type":"ContainerStarted","Data":"be07160d076c091e4ccb91e01cf60c7ecc0ce620bd4957184b2199da6a0ccc24"} Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.075494 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6489c4db99-sc69l" event={"ID":"d79c0347-3494-4451-83b3-9919dd346f19","Type":"ContainerStarted","Data":"8d96446636d1c32bd33935c854eafae7f92ad00599e940eb16e0ee0ad1233ddc"} Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.075626 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6489c4db99-sc69l" event={"ID":"d79c0347-3494-4451-83b3-9919dd346f19","Type":"ContainerStarted","Data":"60fee10fbe2a728536d6c3503ed6b4b03afa53a6b02b3aa79491273e4536b60d"} Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.077983 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" event={"ID":"7382debb-3dc4-4849-9109-5d415c6a196f","Type":"ContainerStarted","Data":"448123d43785c504b22d8f7af78abefd489be91367d82b1fa2c04ab27f96653f"} Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.078084 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" event={"ID":"7382debb-3dc4-4849-9109-5d415c6a196f","Type":"ContainerStarted","Data":"06b62a49cf46e07b4f7ce61be83af8cadb8ad382322ec95765c2a554c930637b"} Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.078209 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.078446 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.078516 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.078571 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.101684 4909 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6489c4db99-sc69l" podStartSLOduration=2.6112524219999997 podStartE2EDuration="5.10166811s" podCreationTimestamp="2025-11-26 07:19:21 +0000 UTC" firstStartedPulling="2025-11-26 07:19:22.203924903 +0000 UTC m=+1134.350136069" lastFinishedPulling="2025-11-26 07:19:24.694340591 +0000 UTC m=+1136.840551757" observedRunningTime="2025-11-26 07:19:26.100786545 +0000 UTC m=+1138.246997711" watchObservedRunningTime="2025-11-26 07:19:26.10166811 +0000 UTC m=+1138.247879276" Nov 26 07:19:26 crc kubenswrapper[4909]: I1126 07:19:26.135789 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" podStartSLOduration=2.800250986 podStartE2EDuration="5.135766911s" podCreationTimestamp="2025-11-26 07:19:21 +0000 UTC" firstStartedPulling="2025-11-26 07:19:22.358936399 +0000 UTC m=+1134.505147565" lastFinishedPulling="2025-11-26 07:19:24.694452324 +0000 UTC m=+1136.840663490" observedRunningTime="2025-11-26 07:19:26.123098455 +0000 UTC m=+1138.269309631" watchObservedRunningTime="2025-11-26 07:19:26.135766911 +0000 UTC m=+1138.281978087" Nov 26 07:19:27 crc kubenswrapper[4909]: I1126 07:19:27.087165 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"484a7b64-42d3-45e0-a88e-17da8418f69e","Type":"ContainerStarted","Data":"7d6888613cd1059726eb50ab5b357938fe0b9affb8436543d287c19b8ea85a24"} Nov 26 07:19:27 crc kubenswrapper[4909]: I1126 07:19:27.092810 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8597f74f8-cp26v" event={"ID":"1746d8cc-9394-471e-a1c3-5471e65dfc73","Type":"ContainerStarted","Data":"18c69a50eb20ebbdeb9c3c4cec5b96f232a261f134a17ae2bf389ddcaf0b29a6"} Nov 26 07:19:27 crc kubenswrapper[4909]: I1126 07:19:27.093259 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:27 crc kubenswrapper[4909]: I1126 07:19:27.093336 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:27 crc kubenswrapper[4909]: I1126 07:19:27.115205 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8597f74f8-cp26v" podStartSLOduration=3.115184158 podStartE2EDuration="3.115184158s" podCreationTimestamp="2025-11-26 07:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:27.113616356 +0000 UTC m=+1139.259827522" watchObservedRunningTime="2025-11-26 07:19:27.115184158 +0000 UTC m=+1139.261395324" Nov 26 07:19:28 crc kubenswrapper[4909]: I1126 07:19:28.471338 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 26 07:19:28 crc kubenswrapper[4909]: I1126 07:19:28.471902 4909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 07:19:28 crc kubenswrapper[4909]: I1126 07:19:28.577673 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 26 07:19:28 crc kubenswrapper[4909]: I1126 07:19:28.676355 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 26 07:19:28 crc kubenswrapper[4909]: I1126 07:19:28.676737 4909 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Nov 26 07:19:28 crc kubenswrapper[4909]: I1126 07:19:28.831854 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.134195 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"484a7b64-42d3-45e0-a88e-17da8418f69e","Type":"ContainerStarted","Data":"a745837f49e58d5b5adc128162ab404a5459511e51907aad5db346ac73a29e1b"} Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.134349 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.141491 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pp2qf" event={"ID":"94fb3d6d-c540-4c6d-af4d-257226561c47","Type":"ContainerStarted","Data":"ef965809615405e1783c11f175843f12ff2d4725c7daecb6e6327caf95f0a466"} Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.156664 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.823136109 podStartE2EDuration="7.156650651s" podCreationTimestamp="2025-11-26 07:19:22 +0000 UTC" firstStartedPulling="2025-11-26 07:19:23.608666771 +0000 UTC m=+1135.754877947" lastFinishedPulling="2025-11-26 07:19:27.942181323 +0000 UTC m=+1140.088392489" observedRunningTime="2025-11-26 07:19:29.15510747 +0000 UTC m=+1141.301318636" watchObservedRunningTime="2025-11-26 07:19:29.156650651 +0000 UTC m=+1141.302861817" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.195249 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-pp2qf" podStartSLOduration=3.485372354 podStartE2EDuration="39.195223315s" podCreationTimestamp="2025-11-26 07:18:50 +0000 UTC" firstStartedPulling="2025-11-26 07:18:52.235455257 +0000 UTC m=+1104.381666423" lastFinishedPulling="2025-11-26 07:19:27.945306198 +0000 UTC m=+1140.091517384" observedRunningTime="2025-11-26 07:19:29.172480014 +0000 UTC m=+1141.318691180" watchObservedRunningTime="2025-11-26 07:19:29.195223315 +0000 UTC m=+1141.341434501" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.680351 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.681666 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.685273 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.685434 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-wxbjg" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.685999 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.695737 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.801997 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config-secret\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.802076 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5fcz\" (UniqueName: \"kubernetes.io/projected/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-kube-api-access-f5fcz\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.802103 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.802154 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.904024 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5fcz\" (UniqueName: \"kubernetes.io/projected/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-kube-api-access-f5fcz\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.904076 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.904136 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.904198 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config-secret\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.905175 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.910298 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config-secret\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.911430 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:29 crc kubenswrapper[4909]: I1126 07:19:29.926020 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5fcz\" (UniqueName: \"kubernetes.io/projected/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-kube-api-access-f5fcz\") pod \"openstackclient\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " pod="openstack/openstackclient" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.040162 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.295915 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-567d49d699-wbzsj"] Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.301078 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.307970 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.308250 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.308363 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.319966 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-567d49d699-wbzsj"] Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.413367 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-config-data\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.413426 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-public-tls-certs\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.413451 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-combined-ca-bundle\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.413527 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-log-httpd\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.413584 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-etc-swift\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.413665 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-run-httpd\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.413713 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn456\" (UniqueName: \"kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-kube-api-access-kn456\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " 
pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.413730 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-internal-tls-certs\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.515469 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-config-data\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.515510 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-public-tls-certs\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.515535 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-combined-ca-bundle\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.515570 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-log-httpd\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.515617 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-etc-swift\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.515652 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-run-httpd\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.515687 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn456\" (UniqueName: \"kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-kube-api-access-kn456\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.515710 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-internal-tls-certs\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " 
pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.516771 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-run-httpd\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.517018 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-log-httpd\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.526098 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-internal-tls-certs\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.526471 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-config-data\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.527174 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-public-tls-certs\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.529091 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-etc-swift\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.530970 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-combined-ca-bundle\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.546740 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.556233 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn456\" (UniqueName: \"kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-kube-api-access-kn456\") pod \"swift-proxy-567d49d699-wbzsj\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:30 crc kubenswrapper[4909]: I1126 07:19:30.623326 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:31 crc kubenswrapper[4909]: I1126 07:19:31.149844 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-567d49d699-wbzsj"] Nov 26 07:19:31 crc kubenswrapper[4909]: I1126 07:19:31.186282 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7","Type":"ContainerStarted","Data":"906de5e9d6e4b24e066cf67c92541650b821691f9f46f1844dea83f45b935ff0"} Nov 26 07:19:31 crc kubenswrapper[4909]: I1126 07:19:31.306635 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:31 crc kubenswrapper[4909]: I1126 07:19:31.307478 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="ceilometer-central-agent" containerID="cri-o://be07160d076c091e4ccb91e01cf60c7ecc0ce620bd4957184b2199da6a0ccc24" gracePeriod=30 Nov 26 07:19:31 crc kubenswrapper[4909]: I1126 07:19:31.307818 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="proxy-httpd" containerID="cri-o://a745837f49e58d5b5adc128162ab404a5459511e51907aad5db346ac73a29e1b" gracePeriod=30 Nov 26 07:19:31 crc kubenswrapper[4909]: I1126 07:19:31.308136 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="ceilometer-notification-agent" containerID="cri-o://6f3aa67aca68458a7a4a042f1beb28fb1ef9115c042193e380388e91aef5f52b" gracePeriod=30 Nov 26 07:19:31 crc kubenswrapper[4909]: I1126 07:19:31.308199 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="sg-core" containerID="cri-o://7d6888613cd1059726eb50ab5b357938fe0b9affb8436543d287c19b8ea85a24" gracePeriod=30 Nov 26 07:19:31 crc kubenswrapper[4909]: I1126 07:19:31.639778 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:31 crc kubenswrapper[4909]: I1126 07:19:31.701003 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-bd8tw"] Nov 26 07:19:31 crc kubenswrapper[4909]: I1126 07:19:31.701271 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" podUID="951c976a-6ae8-4801-8c3c-de061e016828" containerName="dnsmasq-dns" containerID="cri-o://456a094735a54772ec296b5e866c2151739dd521bfe320f46949d078901921bf" gracePeriod=10 Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.238874 4909 generic.go:334] "Generic (PLEG): container finished" podID="951c976a-6ae8-4801-8c3c-de061e016828" containerID="456a094735a54772ec296b5e866c2151739dd521bfe320f46949d078901921bf" exitCode=0 Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.239207 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" event={"ID":"951c976a-6ae8-4801-8c3c-de061e016828","Type":"ContainerDied","Data":"456a094735a54772ec296b5e866c2151739dd521bfe320f46949d078901921bf"} Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.241803 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-567d49d699-wbzsj" 
event={"ID":"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32","Type":"ContainerStarted","Data":"880e17b02d4c9dab1267c346c106365cd7c194623167e4792837a0d418d59a8f"} Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.241836 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-567d49d699-wbzsj" event={"ID":"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32","Type":"ContainerStarted","Data":"198a28e1346d287b3f7810156c663f6c5e14b09f746144f9f4aee54002dbd0a2"} Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.241845 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-567d49d699-wbzsj" event={"ID":"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32","Type":"ContainerStarted","Data":"4fd891ac62fd450211d95085d96ee6dfb80d84d55ac3f44183d0b65a8979e7df"} Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.242854 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.242877 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.246085 4909 generic.go:334] "Generic (PLEG): container finished" podID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerID="a745837f49e58d5b5adc128162ab404a5459511e51907aad5db346ac73a29e1b" exitCode=0 Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.246112 4909 generic.go:334] "Generic (PLEG): container finished" podID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerID="7d6888613cd1059726eb50ab5b357938fe0b9affb8436543d287c19b8ea85a24" exitCode=2 Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.246120 4909 generic.go:334] "Generic (PLEG): container finished" podID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerID="6f3aa67aca68458a7a4a042f1beb28fb1ef9115c042193e380388e91aef5f52b" exitCode=0 Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.246129 4909 generic.go:334] "Generic (PLEG): container finished" podID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerID="be07160d076c091e4ccb91e01cf60c7ecc0ce620bd4957184b2199da6a0ccc24" exitCode=0 Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.246148 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"484a7b64-42d3-45e0-a88e-17da8418f69e","Type":"ContainerDied","Data":"a745837f49e58d5b5adc128162ab404a5459511e51907aad5db346ac73a29e1b"} Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.246172 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"484a7b64-42d3-45e0-a88e-17da8418f69e","Type":"ContainerDied","Data":"7d6888613cd1059726eb50ab5b357938fe0b9affb8436543d287c19b8ea85a24"} Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.246183 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"484a7b64-42d3-45e0-a88e-17da8418f69e","Type":"ContainerDied","Data":"6f3aa67aca68458a7a4a042f1beb28fb1ef9115c042193e380388e91aef5f52b"} Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.246193 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"484a7b64-42d3-45e0-a88e-17da8418f69e","Type":"ContainerDied","Data":"be07160d076c091e4ccb91e01cf60c7ecc0ce620bd4957184b2199da6a0ccc24"} Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.264199 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-567d49d699-wbzsj" 
podStartSLOduration=2.264178608 podStartE2EDuration="2.264178608s" podCreationTimestamp="2025-11-26 07:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:32.260522869 +0000 UTC m=+1144.406734035" watchObservedRunningTime="2025-11-26 07:19:32.264178608 +0000 UTC m=+1144.410389764" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.456572 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.466037 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482562 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppfd4\" (UniqueName: \"kubernetes.io/projected/484a7b64-42d3-45e0-a88e-17da8418f69e-kube-api-access-ppfd4\") pod \"484a7b64-42d3-45e0-a88e-17da8418f69e\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482617 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-config-data\") pod \"484a7b64-42d3-45e0-a88e-17da8418f69e\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482641 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-svc\") pod \"951c976a-6ae8-4801-8c3c-de061e016828\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482676 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-swift-storage-0\") pod \"951c976a-6ae8-4801-8c3c-de061e016828\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482709 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-nb\") pod \"951c976a-6ae8-4801-8c3c-de061e016828\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482749 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-log-httpd\") pod \"484a7b64-42d3-45e0-a88e-17da8418f69e\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482790 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-sb\") pod \"951c976a-6ae8-4801-8c3c-de061e016828\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482820 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj89h\" (UniqueName: \"kubernetes.io/projected/951c976a-6ae8-4801-8c3c-de061e016828-kube-api-access-kj89h\") pod \"951c976a-6ae8-4801-8c3c-de061e016828\" (UID: 
\"951c976a-6ae8-4801-8c3c-de061e016828\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482847 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-combined-ca-bundle\") pod \"484a7b64-42d3-45e0-a88e-17da8418f69e\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482866 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-sg-core-conf-yaml\") pod \"484a7b64-42d3-45e0-a88e-17da8418f69e\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482885 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-run-httpd\") pod \"484a7b64-42d3-45e0-a88e-17da8418f69e\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482909 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-scripts\") pod \"484a7b64-42d3-45e0-a88e-17da8418f69e\" (UID: \"484a7b64-42d3-45e0-a88e-17da8418f69e\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.482940 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-config\") pod \"951c976a-6ae8-4801-8c3c-de061e016828\" (UID: \"951c976a-6ae8-4801-8c3c-de061e016828\") " Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.488286 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "484a7b64-42d3-45e0-a88e-17da8418f69e" (UID: "484a7b64-42d3-45e0-a88e-17da8418f69e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.499188 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484a7b64-42d3-45e0-a88e-17da8418f69e-kube-api-access-ppfd4" (OuterVolumeSpecName: "kube-api-access-ppfd4") pod "484a7b64-42d3-45e0-a88e-17da8418f69e" (UID: "484a7b64-42d3-45e0-a88e-17da8418f69e"). InnerVolumeSpecName "kube-api-access-ppfd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.511660 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951c976a-6ae8-4801-8c3c-de061e016828-kube-api-access-kj89h" (OuterVolumeSpecName: "kube-api-access-kj89h") pod "951c976a-6ae8-4801-8c3c-de061e016828" (UID: "951c976a-6ae8-4801-8c3c-de061e016828"). InnerVolumeSpecName "kube-api-access-kj89h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.525729 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-scripts" (OuterVolumeSpecName: "scripts") pod "484a7b64-42d3-45e0-a88e-17da8418f69e" (UID: "484a7b64-42d3-45e0-a88e-17da8418f69e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.534097 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "484a7b64-42d3-45e0-a88e-17da8418f69e" (UID: "484a7b64-42d3-45e0-a88e-17da8418f69e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.583256 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "951c976a-6ae8-4801-8c3c-de061e016828" (UID: "951c976a-6ae8-4801-8c3c-de061e016828"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.585634 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.585656 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.585665 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppfd4\" (UniqueName: \"kubernetes.io/projected/484a7b64-42d3-45e0-a88e-17da8418f69e-kube-api-access-ppfd4\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.585674 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.585683 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/484a7b64-42d3-45e0-a88e-17da8418f69e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.585690 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj89h\" (UniqueName: \"kubernetes.io/projected/951c976a-6ae8-4801-8c3c-de061e016828-kube-api-access-kj89h\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.586749 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "951c976a-6ae8-4801-8c3c-de061e016828" (UID: "951c976a-6ae8-4801-8c3c-de061e016828"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.603976 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "484a7b64-42d3-45e0-a88e-17da8418f69e" (UID: "484a7b64-42d3-45e0-a88e-17da8418f69e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.617105 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "951c976a-6ae8-4801-8c3c-de061e016828" (UID: "951c976a-6ae8-4801-8c3c-de061e016828"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.618688 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-config" (OuterVolumeSpecName: "config") pod "951c976a-6ae8-4801-8c3c-de061e016828" (UID: "951c976a-6ae8-4801-8c3c-de061e016828"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.620363 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "951c976a-6ae8-4801-8c3c-de061e016828" (UID: "951c976a-6ae8-4801-8c3c-de061e016828"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.677769 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "484a7b64-42d3-45e0-a88e-17da8418f69e" (UID: "484a7b64-42d3-45e0-a88e-17da8418f69e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.690758 4909 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.690788 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.690806 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.690814 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.690823 4909 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.690832 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/951c976a-6ae8-4801-8c3c-de061e016828-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.706729 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-config-data" (OuterVolumeSpecName: "config-data") pod "484a7b64-42d3-45e0-a88e-17da8418f69e" (UID: "484a7b64-42d3-45e0-a88e-17da8418f69e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:32 crc kubenswrapper[4909]: I1126 07:19:32.792912 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/484a7b64-42d3-45e0-a88e-17da8418f69e-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.255341 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" event={"ID":"951c976a-6ae8-4801-8c3c-de061e016828","Type":"ContainerDied","Data":"84313ab6b40d8b2dcbc3929485526324b597c63740760cf2030163f2dc082076"} Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.255390 4909 scope.go:117] "RemoveContainer" containerID="456a094735a54772ec296b5e866c2151739dd521bfe320f46949d078901921bf" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.255485 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.262327 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"484a7b64-42d3-45e0-a88e-17da8418f69e","Type":"ContainerDied","Data":"cac2c37f525f917f28d8fe340d60310c8a69e6bc824fe3a81345457a8e7cf153"} Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.262372 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.281099 4909 scope.go:117] "RemoveContainer" containerID="1b157177c000c40a6da3a9662a9c02a2e5fdf1ff9e3f3f039271b3801854b02c" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.301072 4909 scope.go:117] "RemoveContainer" containerID="a745837f49e58d5b5adc128162ab404a5459511e51907aad5db346ac73a29e1b" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.313483 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-bd8tw"] Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.324647 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-bd8tw"] Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.340217 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.352411 4909 scope.go:117] "RemoveContainer" containerID="7d6888613cd1059726eb50ab5b357938fe0b9affb8436543d287c19b8ea85a24" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.354492 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.368672 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:33 crc kubenswrapper[4909]: E1126 07:19:33.369047 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="proxy-httpd" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.369062 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="proxy-httpd" Nov 26 07:19:33 crc kubenswrapper[4909]: E1126 07:19:33.369071 4909 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="951c976a-6ae8-4801-8c3c-de061e016828" containerName="init" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.369077 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="951c976a-6ae8-4801-8c3c-de061e016828" containerName="init" Nov 26 07:19:33 crc kubenswrapper[4909]: E1126 07:19:33.369090 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="ceilometer-central-agent" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.369096 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="ceilometer-central-agent" Nov 26 07:19:33 crc kubenswrapper[4909]: E1126 07:19:33.369113 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="ceilometer-notification-agent" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.369119 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="ceilometer-notification-agent" Nov 26 07:19:33 crc kubenswrapper[4909]: E1126 07:19:33.369136 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951c976a-6ae8-4801-8c3c-de061e016828" containerName="dnsmasq-dns" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.369141 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="951c976a-6ae8-4801-8c3c-de061e016828" containerName="dnsmasq-dns" Nov 26 07:19:33 crc kubenswrapper[4909]: E1126 07:19:33.369159 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="sg-core" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.369165 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="sg-core" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.369317 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="951c976a-6ae8-4801-8c3c-de061e016828" containerName="dnsmasq-dns" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.369327 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="sg-core" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.369340 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="ceilometer-notification-agent" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.369353 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="ceilometer-central-agent" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.369369 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" containerName="proxy-httpd" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.370839 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.375690 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.381988 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.382047 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.407605 4909 scope.go:117] "RemoveContainer" containerID="6f3aa67aca68458a7a4a042f1beb28fb1ef9115c042193e380388e91aef5f52b" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.511671 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-run-httpd\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.511975 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-config-data\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.511997 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.512099 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.512165 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-log-httpd\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.512182 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-scripts\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.512206 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rct2h\" (UniqueName: \"kubernetes.io/projected/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-kube-api-access-rct2h\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.512641 4909 scope.go:117] "RemoveContainer" containerID="be07160d076c091e4ccb91e01cf60c7ecc0ce620bd4957184b2199da6a0ccc24" Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.613883 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.613982 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-log-httpd\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.614001 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-scripts\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.614035 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rct2h\" (UniqueName: \"kubernetes.io/projected/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-kube-api-access-rct2h\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.614067 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-run-httpd\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.614101 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-config-data\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.614118 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.614491 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-log-httpd\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.614695 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-run-httpd\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.620894 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-scripts\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.621189 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.623999 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-config-data\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.624922 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.645218 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rct2h\" (UniqueName: \"kubernetes.io/projected/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-kube-api-access-rct2h\") pod \"ceilometer-0\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " pod="openstack/ceilometer-0"
Nov 26 07:19:33 crc kubenswrapper[4909]: I1126 07:19:33.706120 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 26 07:19:34 crc kubenswrapper[4909]: I1126 07:19:34.084101 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b88557586-pqm94"
Nov 26 07:19:34 crc kubenswrapper[4909]: I1126 07:19:34.146358 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b88557586-pqm94"
Nov 26 07:19:34 crc kubenswrapper[4909]: I1126 07:19:34.178658 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:19:34 crc kubenswrapper[4909]: I1126 07:19:34.276746 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb","Type":"ContainerStarted","Data":"a866662b6d0a61960f31b3dac81d21249d3af9cde469ddf651769b52e5de0c5c"}
Nov 26 07:19:34 crc kubenswrapper[4909]: I1126 07:19:34.545777 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="484a7b64-42d3-45e0-a88e-17da8418f69e" path="/var/lib/kubelet/pods/484a7b64-42d3-45e0-a88e-17da8418f69e/volumes"
Nov 26 07:19:34 crc kubenswrapper[4909]: I1126 07:19:34.547029 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="951c976a-6ae8-4801-8c3c-de061e016828" path="/var/lib/kubelet/pods/951c976a-6ae8-4801-8c3c-de061e016828/volumes"
Nov 26 07:19:35 crc kubenswrapper[4909]: I1126 07:19:35.292122 4909 generic.go:334] "Generic (PLEG): container finished" podID="94fb3d6d-c540-4c6d-af4d-257226561c47" containerID="ef965809615405e1783c11f175843f12ff2d4725c7daecb6e6327caf95f0a466" exitCode=0
Nov 26 07:19:35 crc kubenswrapper[4909]: I1126 07:19:35.292188 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pp2qf" event={"ID":"94fb3d6d-c540-4c6d-af4d-257226561c47","Type":"ContainerDied","Data":"ef965809615405e1783c11f175843f12ff2d4725c7daecb6e6327caf95f0a466"}
event={"ID":"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb","Type":"ContainerStarted","Data":"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0"} Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.308262 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb","Type":"ContainerStarted","Data":"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9"} Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.683957 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.885330 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/94fb3d6d-c540-4c6d-af4d-257226561c47-etc-machine-id\") pod \"94fb3d6d-c540-4c6d-af4d-257226561c47\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.885426 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-combined-ca-bundle\") pod \"94fb3d6d-c540-4c6d-af4d-257226561c47\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.885453 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-config-data\") pod \"94fb3d6d-c540-4c6d-af4d-257226561c47\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.885471 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-scripts\") pod \"94fb3d6d-c540-4c6d-af4d-257226561c47\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.885493 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrlsz\" (UniqueName: \"kubernetes.io/projected/94fb3d6d-c540-4c6d-af4d-257226561c47-kube-api-access-lrlsz\") pod \"94fb3d6d-c540-4c6d-af4d-257226561c47\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.885553 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-db-sync-config-data\") pod \"94fb3d6d-c540-4c6d-af4d-257226561c47\" (UID: \"94fb3d6d-c540-4c6d-af4d-257226561c47\") " Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.885497 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94fb3d6d-c540-4c6d-af4d-257226561c47-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "94fb3d6d-c540-4c6d-af4d-257226561c47" (UID: "94fb3d6d-c540-4c6d-af4d-257226561c47"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.893717 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-scripts" (OuterVolumeSpecName: "scripts") pod "94fb3d6d-c540-4c6d-af4d-257226561c47" (UID: "94fb3d6d-c540-4c6d-af4d-257226561c47"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.893782 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "94fb3d6d-c540-4c6d-af4d-257226561c47" (UID: "94fb3d6d-c540-4c6d-af4d-257226561c47"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.895776 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94fb3d6d-c540-4c6d-af4d-257226561c47-kube-api-access-lrlsz" (OuterVolumeSpecName: "kube-api-access-lrlsz") pod "94fb3d6d-c540-4c6d-af4d-257226561c47" (UID: "94fb3d6d-c540-4c6d-af4d-257226561c47"). InnerVolumeSpecName "kube-api-access-lrlsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.919866 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94fb3d6d-c540-4c6d-af4d-257226561c47" (UID: "94fb3d6d-c540-4c6d-af4d-257226561c47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.957066 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.987880 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.987924 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.987937 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrlsz\" (UniqueName: \"kubernetes.io/projected/94fb3d6d-c540-4c6d-af4d-257226561c47-kube-api-access-lrlsz\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.987948 4909 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.987959 4909 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/94fb3d6d-c540-4c6d-af4d-257226561c47-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:36 crc kubenswrapper[4909]: I1126 07:19:36.997274 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-config-data" (OuterVolumeSpecName: "config-data") pod "94fb3d6d-c540-4c6d-af4d-257226561c47" (UID: "94fb3d6d-c540-4c6d-af4d-257226561c47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.078576 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-bd8tw" podUID="951c976a-6ae8-4801-8c3c-de061e016828" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: i/o timeout" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.090292 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb3d6d-c540-4c6d-af4d-257226561c47-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.163964 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.253037 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b88557586-pqm94"] Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.256443 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b88557586-pqm94" podUID="c7e9a004-6dc5-44f0-9429-cca23187791b" containerName="barbican-api-log" containerID="cri-o://eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58" gracePeriod=30 Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.256727 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b88557586-pqm94" podUID="c7e9a004-6dc5-44f0-9429-cca23187791b" containerName="barbican-api" containerID="cri-o://c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b" gracePeriod=30 Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.303051 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.303110 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.303156 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.348228 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pp2qf" event={"ID":"94fb3d6d-c540-4c6d-af4d-257226561c47","Type":"ContainerDied","Data":"b2e9973be13dfb22210d57d93d1ec344ebe3743456b477a4b32c2ef95f2d58c9"} Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.348272 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2e9973be13dfb22210d57d93d1ec344ebe3743456b477a4b32c2ef95f2d58c9" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.348340 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-pp2qf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.388139 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb","Type":"ContainerStarted","Data":"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf"} Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.406126 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca3ef3a41105acb11f5f44a2c705bdfdb176056de55b77bbb72e7524ee7071fd"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.406216 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://ca3ef3a41105acb11f5f44a2c705bdfdb176056de55b77bbb72e7524ee7071fd" gracePeriod=600 Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.600342 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 07:19:37 crc kubenswrapper[4909]: E1126 07:19:37.600825 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fb3d6d-c540-4c6d-af4d-257226561c47" containerName="cinder-db-sync" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.600844 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fb3d6d-c540-4c6d-af4d-257226561c47" containerName="cinder-db-sync" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.601068 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="94fb3d6d-c540-4c6d-af4d-257226561c47" containerName="cinder-db-sync" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.602049 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.605216 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vhhwz" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.605336 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.605477 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.605258 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.627821 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.669704 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-brwhf"] Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.671482 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.698667 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-brwhf"] Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.705670 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.705739 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.705835 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.705854 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlkdx\" (UniqueName: \"kubernetes.io/projected/2aec08aa-41f2-437a-8b51-a4d065dc4856-kube-api-access-mlkdx\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.705887 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2aec08aa-41f2-437a-8b51-a4d065dc4856-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.705908 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-scripts\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.810899 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.810955 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-svc\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.811020 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhtn9\" 
(UniqueName: \"kubernetes.io/projected/59b07954-e19a-4f32-af95-f1e1de784683-kube-api-access-nhtn9\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.811077 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.811107 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.811174 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-config\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.811226 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.811245 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlkdx\" (UniqueName: \"kubernetes.io/projected/2aec08aa-41f2-437a-8b51-a4d065dc4856-kube-api-access-mlkdx\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.811273 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.811317 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2aec08aa-41f2-437a-8b51-a4d065dc4856-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.811340 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-scripts\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.811356 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.817710 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.817805 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2aec08aa-41f2-437a-8b51-a4d065dc4856-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.825445 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.831412 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.832845 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.844189 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.846347 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.849192 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-scripts\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.850956 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.863363 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlkdx\" (UniqueName: \"kubernetes.io/projected/2aec08aa-41f2-437a-8b51-a4d065dc4856-kube-api-access-mlkdx\") pod \"cinder-scheduler-0\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.913511 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-config\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.913624 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.913663 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.913706 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.913727 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-svc\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.913774 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhtn9\" (UniqueName: \"kubernetes.io/projected/59b07954-e19a-4f32-af95-f1e1de784683-kube-api-access-nhtn9\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.916150 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-config\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.916228 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-svc\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.918835 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.919175 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.919471 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.938059 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhtn9\" (UniqueName: \"kubernetes.io/projected/59b07954-e19a-4f32-af95-f1e1de784683-kube-api-access-nhtn9\") pod \"dnsmasq-dns-5784cf869f-brwhf\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.957655 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 07:19:37 crc kubenswrapper[4909]: I1126 07:19:37.972943 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.015336 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data-custom\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0" Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.015695 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f503d0d1-cf4e-459a-b928-f92afd8368d5-logs\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0" Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.015795 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8468q\" (UniqueName: \"kubernetes.io/projected/f503d0d1-cf4e-459a-b928-f92afd8368d5-kube-api-access-8468q\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0" Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.015844 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f503d0d1-cf4e-459a-b928-f92afd8368d5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0" Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.015890 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-scripts\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0" Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.015914 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0" Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.015955 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0" Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.117739 
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.117739 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data-custom\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.117791 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f503d0d1-cf4e-459a-b928-f92afd8368d5-logs\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.117884 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8468q\" (UniqueName: \"kubernetes.io/projected/f503d0d1-cf4e-459a-b928-f92afd8368d5-kube-api-access-8468q\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.117920 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f503d0d1-cf4e-459a-b928-f92afd8368d5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.117956 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-scripts\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.117971 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.117998 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.120020 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f503d0d1-cf4e-459a-b928-f92afd8368d5-logs\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.120816 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f503d0d1-cf4e-459a-b928-f92afd8368d5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.125288 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.125460 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-scripts\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.126285 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data-custom\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.127117 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.143424 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8468q\" (UniqueName: \"kubernetes.io/projected/f503d0d1-cf4e-459a-b928-f92afd8368d5-kube-api-access-8468q\") pod \"cinder-api-0\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.320667 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.406517 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb","Type":"ContainerStarted","Data":"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e"}
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.406837 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.414015 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="ca3ef3a41105acb11f5f44a2c705bdfdb176056de55b77bbb72e7524ee7071fd" exitCode=0
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.414086 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"ca3ef3a41105acb11f5f44a2c705bdfdb176056de55b77bbb72e7524ee7071fd"}
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.414115 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"01a4d185a8d7c30690fef08cf37e1461869ce637ebe4ac1e55eebb9783625426"}
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.414133 4909 scope.go:117] "RemoveContainer" containerID="e56f7341ca39cd31863ba15982a9b0b7165f8ceb520eefc8a8ee6734e9f64390"
Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.424728 4909 generic.go:334] "Generic (PLEG): container finished" podID="c7e9a004-6dc5-44f0-9429-cca23187791b" containerID="eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58" exitCode=143
event={"ID":"c7e9a004-6dc5-44f0-9429-cca23187791b","Type":"ContainerDied","Data":"eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58"} Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.441239 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.863985675 podStartE2EDuration="5.441221275s" podCreationTimestamp="2025-11-26 07:19:33 +0000 UTC" firstStartedPulling="2025-11-26 07:19:34.193519378 +0000 UTC m=+1146.339730544" lastFinishedPulling="2025-11-26 07:19:37.770754978 +0000 UTC m=+1149.916966144" observedRunningTime="2025-11-26 07:19:38.432206558 +0000 UTC m=+1150.578417724" watchObservedRunningTime="2025-11-26 07:19:38.441221275 +0000 UTC m=+1150.587432441" Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.582253 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.782884 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-brwhf"] Nov 26 07:19:38 crc kubenswrapper[4909]: I1126 07:19:38.928358 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:19:39 crc kubenswrapper[4909]: I1126 07:19:39.443464 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2aec08aa-41f2-437a-8b51-a4d065dc4856","Type":"ContainerStarted","Data":"ae755e468f645f767afab399f2eddb1cbec667d417e94321b4d7c2df67169642"} Nov 26 07:19:39 crc kubenswrapper[4909]: I1126 07:19:39.452453 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503d0d1-cf4e-459a-b928-f92afd8368d5","Type":"ContainerStarted","Data":"e95a03508d7e5707dafed14a4267836486a3ac980d7c4e922f64be88cb65cccb"} Nov 26 07:19:39 crc kubenswrapper[4909]: I1126 07:19:39.460507 4909 generic.go:334] "Generic (PLEG): container finished" podID="59b07954-e19a-4f32-af95-f1e1de784683" containerID="d892db28243a047dd6234031566953da43ace32bce833effe48bc4f81f8c8ca6" exitCode=0 Nov 26 07:19:39 crc kubenswrapper[4909]: I1126 07:19:39.460545 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" event={"ID":"59b07954-e19a-4f32-af95-f1e1de784683","Type":"ContainerDied","Data":"d892db28243a047dd6234031566953da43ace32bce833effe48bc4f81f8c8ca6"} Nov 26 07:19:39 crc kubenswrapper[4909]: I1126 07:19:39.460609 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" event={"ID":"59b07954-e19a-4f32-af95-f1e1de784683","Type":"ContainerStarted","Data":"ec41fca4e5fe6c058ec27f30cee66a7ed8cbab3198e1a5f1f852d0e9a18574e8"} Nov 26 07:19:39 crc kubenswrapper[4909]: I1126 07:19:39.877415 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 07:19:40.214292 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 07:19:40.487874 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" event={"ID":"59b07954-e19a-4f32-af95-f1e1de784683","Type":"ContainerStarted","Data":"247b342ffc64904df65f2383aee5ca8a0188634c09e09fc107fb61f43ce0f4b1"} Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 07:19:40.488179 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 
07:19:40.497617 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2aec08aa-41f2-437a-8b51-a4d065dc4856","Type":"ContainerStarted","Data":"ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d"} Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 07:19:40.514003 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" podStartSLOduration=3.513982392 podStartE2EDuration="3.513982392s" podCreationTimestamp="2025-11-26 07:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:40.513818638 +0000 UTC m=+1152.660029804" watchObservedRunningTime="2025-11-26 07:19:40.513982392 +0000 UTC m=+1152.660193568" Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 07:19:40.518528 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="ceilometer-central-agent" containerID="cri-o://33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0" gracePeriod=30 Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 07:19:40.519046 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="proxy-httpd" containerID="cri-o://3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e" gracePeriod=30 Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 07:19:40.519098 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="sg-core" containerID="cri-o://733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf" gracePeriod=30 Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 07:19:40.519131 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="ceilometer-notification-agent" containerID="cri-o://e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9" gracePeriod=30 Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 07:19:40.553436 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503d0d1-cf4e-459a-b928-f92afd8368d5","Type":"ContainerStarted","Data":"23a2d29c37454e96f439b04a2b47d4669d0de0721240be366937127da50d0d42"} Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 07:19:40.630747 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:40 crc kubenswrapper[4909]: I1126 07:19:40.637393 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.308105 4909 util.go:48] "No ready sandbox for pod can be found. 
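The "Observed pod startup duration" records above fit a simple relation: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling). For ceilometer-0, 5.441221275s - (07:19:37.770754978 - 07:19:34.193519378) = 1.863985675s, matching the logged value exactly; for dnsmasq the pull timestamps are the zero value, so SLO equals E2E. A sketch reproducing that arithmetic from the logged timestamps (the subtraction rule is inferred from these records matching exactly, not quoted from kubelet source):

// startup_latency.go - reproduce the tracker arithmetic from the ceilometer-0
// record above: SLO duration = end-to-end duration minus the image-pull window.
package main

import (
	"fmt"
	"time"
)

func ts(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-11-26 07:19:33 +0000 UTC")             // podCreationTimestamp
	firstPull := ts("2025-11-26 07:19:34.193519378 +0000 UTC") // firstStartedPulling
	lastPull := ts("2025-11-26 07:19:37.770754978 +0000 UTC")  // lastFinishedPulling
	running := ts("2025-11-26 07:19:38.441221275 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // 5.441221275s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.863985675s, the logged podStartSLOduration
	fmt.Println(e2e, slo)
}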
Need to start a new one" pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.442123 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data-custom\") pod \"c7e9a004-6dc5-44f0-9429-cca23187791b\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.442262 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2rsl\" (UniqueName: \"kubernetes.io/projected/c7e9a004-6dc5-44f0-9429-cca23187791b-kube-api-access-w2rsl\") pod \"c7e9a004-6dc5-44f0-9429-cca23187791b\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.442295 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data\") pod \"c7e9a004-6dc5-44f0-9429-cca23187791b\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.442345 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e9a004-6dc5-44f0-9429-cca23187791b-logs\") pod \"c7e9a004-6dc5-44f0-9429-cca23187791b\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.442364 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-combined-ca-bundle\") pod \"c7e9a004-6dc5-44f0-9429-cca23187791b\" (UID: \"c7e9a004-6dc5-44f0-9429-cca23187791b\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.443446 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e9a004-6dc5-44f0-9429-cca23187791b-logs" (OuterVolumeSpecName: "logs") pod "c7e9a004-6dc5-44f0-9429-cca23187791b" (UID: "c7e9a004-6dc5-44f0-9429-cca23187791b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.451526 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c7e9a004-6dc5-44f0-9429-cca23187791b" (UID: "c7e9a004-6dc5-44f0-9429-cca23187791b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.459079 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e9a004-6dc5-44f0-9429-cca23187791b-kube-api-access-w2rsl" (OuterVolumeSpecName: "kube-api-access-w2rsl") pod "c7e9a004-6dc5-44f0-9429-cca23187791b" (UID: "c7e9a004-6dc5-44f0-9429-cca23187791b"). InnerVolumeSpecName "kube-api-access-w2rsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.478411 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.505865 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7e9a004-6dc5-44f0-9429-cca23187791b" (UID: "c7e9a004-6dc5-44f0-9429-cca23187791b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.535730 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data" (OuterVolumeSpecName: "config-data") pod "c7e9a004-6dc5-44f0-9429-cca23187791b" (UID: "c7e9a004-6dc5-44f0-9429-cca23187791b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.548979 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2rsl\" (UniqueName: \"kubernetes.io/projected/c7e9a004-6dc5-44f0-9429-cca23187791b-kube-api-access-w2rsl\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.549010 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.549021 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e9a004-6dc5-44f0-9429-cca23187791b-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.549031 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.549040 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7e9a004-6dc5-44f0-9429-cca23187791b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.552294 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2aec08aa-41f2-437a-8b51-a4d065dc4856","Type":"ContainerStarted","Data":"1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56"} Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.561111 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503d0d1-cf4e-459a-b928-f92afd8368d5","Type":"ContainerStarted","Data":"b839c51d24d478c6f7088fd698b41d9b8abf9631f25af2b20b31614e8c759b5d"} Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.561233 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f503d0d1-cf4e-459a-b928-f92afd8368d5" containerName="cinder-api-log" containerID="cri-o://23a2d29c37454e96f439b04a2b47d4669d0de0721240be366937127da50d0d42" gracePeriod=30 Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.561257 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f503d0d1-cf4e-459a-b928-f92afd8368d5" containerName="cinder-api" containerID="cri-o://b839c51d24d478c6f7088fd698b41d9b8abf9631f25af2b20b31614e8c759b5d" 
gracePeriod=30 Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.561265 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.566021 4909 generic.go:334] "Generic (PLEG): container finished" podID="c7e9a004-6dc5-44f0-9429-cca23187791b" containerID="c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b" exitCode=0 Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.566109 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b88557586-pqm94" event={"ID":"c7e9a004-6dc5-44f0-9429-cca23187791b","Type":"ContainerDied","Data":"c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b"} Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.566150 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b88557586-pqm94" event={"ID":"c7e9a004-6dc5-44f0-9429-cca23187791b","Type":"ContainerDied","Data":"f79563afbec79375eb8747d487492eecafad40a8e13205ac2b896f663e30e13b"} Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.566170 4909 scope.go:117] "RemoveContainer" containerID="c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.566327 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b88557586-pqm94" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.581490 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.8234685280000003 podStartE2EDuration="4.581471056s" podCreationTimestamp="2025-11-26 07:19:37 +0000 UTC" firstStartedPulling="2025-11-26 07:19:38.59330419 +0000 UTC m=+1150.739515356" lastFinishedPulling="2025-11-26 07:19:39.351306728 +0000 UTC m=+1151.497517884" observedRunningTime="2025-11-26 07:19:41.568666277 +0000 UTC m=+1153.714877453" watchObservedRunningTime="2025-11-26 07:19:41.581471056 +0000 UTC m=+1153.727682222" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.582089 4909 generic.go:334] "Generic (PLEG): container finished" podID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerID="3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e" exitCode=0 Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.582991 4909 generic.go:334] "Generic (PLEG): container finished" podID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerID="733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf" exitCode=2 Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.583200 4909 generic.go:334] "Generic (PLEG): container finished" podID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerID="e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9" exitCode=0 Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.583467 4909 generic.go:334] "Generic (PLEG): container finished" podID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerID="33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0" exitCode=0 Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.582305 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb","Type":"ContainerDied","Data":"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e"} Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.584668 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb","Type":"ContainerDied","Data":"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf"} Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.584701 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb","Type":"ContainerDied","Data":"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9"} Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.584718 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb","Type":"ContainerDied","Data":"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0"} Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.584730 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb","Type":"ContainerDied","Data":"a866662b6d0a61960f31b3dac81d21249d3af9cde469ddf651769b52e5de0c5c"} Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.582270 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.599725 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.599705145 podStartE2EDuration="4.599705145s" podCreationTimestamp="2025-11-26 07:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:41.588210271 +0000 UTC m=+1153.734421437" watchObservedRunningTime="2025-11-26 07:19:41.599705145 +0000 UTC m=+1153.745916311" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.614470 4909 scope.go:117] "RemoveContainer" containerID="eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.645152 4909 scope.go:117] "RemoveContainer" containerID="c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.646204 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b88557586-pqm94"] Nov 26 07:19:41 crc kubenswrapper[4909]: E1126 07:19:41.646490 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b\": container with ID starting with c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b not found: ID does not exist" containerID="c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.646524 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b"} err="failed to get container status \"c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b\": rpc error: code = NotFound desc = could not find container \"c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b\": container with ID starting with c5439798de50cf811c06ab9581e01287e7ed86a70787b82d3acbeb7756bfa22b not found: ID does not exist" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.646543 4909 scope.go:117] "RemoveContainer" containerID="eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58" Nov 26 07:19:41 crc 
kubenswrapper[4909]: E1126 07:19:41.646869 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58\": container with ID starting with eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58 not found: ID does not exist" containerID="eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.646908 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58"} err="failed to get container status \"eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58\": rpc error: code = NotFound desc = could not find container \"eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58\": container with ID starting with eccb04d1f46e982d1a0621c31910431032b430d7ee0ae3a1c31eb2ab44914a58 not found: ID does not exist" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.646936 4909 scope.go:117] "RemoveContainer" containerID="3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.651300 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-combined-ca-bundle\") pod \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.651375 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-sg-core-conf-yaml\") pod \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.651413 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-run-httpd\") pod \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.651480 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rct2h\" (UniqueName: \"kubernetes.io/projected/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-kube-api-access-rct2h\") pod \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.651616 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-scripts\") pod \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.651693 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-log-httpd\") pod \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.651739 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-config-data\") pod \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\" (UID: \"9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb\") " Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.652513 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" (UID: "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.653849 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" (UID: "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.655637 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7b88557586-pqm94"] Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.659102 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-kube-api-access-rct2h" (OuterVolumeSpecName: "kube-api-access-rct2h") pod "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" (UID: "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb"). InnerVolumeSpecName "kube-api-access-rct2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.664089 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-scripts" (OuterVolumeSpecName: "scripts") pod "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" (UID: "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.684011 4909 scope.go:117] "RemoveContainer" containerID="733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.689828 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" (UID: "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.758352 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rct2h\" (UniqueName: \"kubernetes.io/projected/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-kube-api-access-rct2h\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.758386 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.758399 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.758431 4909 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.758451 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.790825 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" (UID: "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.820778 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-config-data" (OuterVolumeSpecName: "config-data") pod "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" (UID: "9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.859955 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:41 crc kubenswrapper[4909]: I1126 07:19:41.859991 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.015048 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.024895 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.042963 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:42 crc kubenswrapper[4909]: E1126 07:19:42.043471 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="proxy-httpd" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043485 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="proxy-httpd" Nov 26 07:19:42 crc kubenswrapper[4909]: E1126 07:19:42.043500 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="ceilometer-notification-agent" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043509 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="ceilometer-notification-agent" Nov 26 07:19:42 crc kubenswrapper[4909]: E1126 07:19:42.043528 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="sg-core" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043538 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="sg-core" Nov 26 07:19:42 crc kubenswrapper[4909]: E1126 07:19:42.043558 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e9a004-6dc5-44f0-9429-cca23187791b" containerName="barbican-api" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043565 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e9a004-6dc5-44f0-9429-cca23187791b" containerName="barbican-api" Nov 26 07:19:42 crc kubenswrapper[4909]: E1126 07:19:42.043606 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e9a004-6dc5-44f0-9429-cca23187791b" containerName="barbican-api-log" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043614 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e9a004-6dc5-44f0-9429-cca23187791b" containerName="barbican-api-log" Nov 26 07:19:42 crc kubenswrapper[4909]: E1126 07:19:42.043631 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="ceilometer-central-agent" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043638 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="ceilometer-central-agent" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043820 4909 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="sg-core" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043837 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="proxy-httpd" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043844 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="ceilometer-central-agent" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043856 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" containerName="ceilometer-notification-agent" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043880 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e9a004-6dc5-44f0-9429-cca23187791b" containerName="barbican-api" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.043895 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e9a004-6dc5-44f0-9429-cca23187791b" containerName="barbican-api-log" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.045802 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.051180 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.051234 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.056427 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.165061 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-scripts\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.165111 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hgsk\" (UniqueName: \"kubernetes.io/projected/bf5c3859-9070-4d84-a4e5-812685fffc00-kube-api-access-8hgsk\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.165136 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-run-httpd\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.165159 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.165176 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-log-httpd\") pod \"ceilometer-0\" (UID: 
\"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.165193 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.165251 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-config-data\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.266850 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.266950 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-config-data\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.267076 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-scripts\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.267114 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hgsk\" (UniqueName: \"kubernetes.io/projected/bf5c3859-9070-4d84-a4e5-812685fffc00-kube-api-access-8hgsk\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.267141 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-run-httpd\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.267171 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.267194 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-log-httpd\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.267547 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-log-httpd\") pod 
\"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.267870 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-run-httpd\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.280974 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-config-data\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.281110 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.281822 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-scripts\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.284322 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.286487 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hgsk\" (UniqueName: \"kubernetes.io/projected/bf5c3859-9070-4d84-a4e5-812685fffc00-kube-api-access-8hgsk\") pod \"ceilometer-0\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.374266 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.510482 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb" path="/var/lib/kubelet/pods/9f474ed6-8d7a-4fbc-98ec-7fb188efdfcb/volumes" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.511391 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e9a004-6dc5-44f0-9429-cca23187791b" path="/var/lib/kubelet/pods/c7e9a004-6dc5-44f0-9429-cca23187791b/volumes" Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.597805 4909 generic.go:334] "Generic (PLEG): container finished" podID="f503d0d1-cf4e-459a-b928-f92afd8368d5" containerID="b839c51d24d478c6f7088fd698b41d9b8abf9631f25af2b20b31614e8c759b5d" exitCode=0 Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.597852 4909 generic.go:334] "Generic (PLEG): container finished" podID="f503d0d1-cf4e-459a-b928-f92afd8368d5" containerID="23a2d29c37454e96f439b04a2b47d4669d0de0721240be366937127da50d0d42" exitCode=143 Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.597895 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503d0d1-cf4e-459a-b928-f92afd8368d5","Type":"ContainerDied","Data":"b839c51d24d478c6f7088fd698b41d9b8abf9631f25af2b20b31614e8c759b5d"} Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.597971 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503d0d1-cf4e-459a-b928-f92afd8368d5","Type":"ContainerDied","Data":"23a2d29c37454e96f439b04a2b47d4669d0de0721240be366937127da50d0d42"} Nov 26 07:19:42 crc kubenswrapper[4909]: I1126 07:19:42.959301 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 26 07:19:43 crc kubenswrapper[4909]: I1126 07:19:43.540134 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:46 crc kubenswrapper[4909]: I1126 07:19:46.156308 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:19:46 crc kubenswrapper[4909]: I1126 07:19:46.793919 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-v2w7l"] Nov 26 07:19:46 crc kubenswrapper[4909]: I1126 07:19:46.795262 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v2w7l" Nov 26 07:19:46 crc kubenswrapper[4909]: I1126 07:19:46.810933 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-v2w7l"] Nov 26 07:19:46 crc kubenswrapper[4909]: I1126 07:19:46.913802 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-pj8dd"] Nov 26 07:19:46 crc kubenswrapper[4909]: I1126 07:19:46.915391 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pj8dd" Nov 26 07:19:46 crc kubenswrapper[4909]: I1126 07:19:46.923335 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pj8dd"] Nov 26 07:19:46 crc kubenswrapper[4909]: I1126 07:19:46.947786 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q5fl\" (UniqueName: \"kubernetes.io/projected/18132a30-759b-445e-887c-84acbf813072-kube-api-access-9q5fl\") pod \"nova-api-db-create-v2w7l\" (UID: \"18132a30-759b-445e-887c-84acbf813072\") " pod="openstack/nova-api-db-create-v2w7l" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.025133 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7llw7"] Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.026802 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7llw7" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.037660 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7llw7"] Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.049935 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q5fl\" (UniqueName: \"kubernetes.io/projected/18132a30-759b-445e-887c-84acbf813072-kube-api-access-9q5fl\") pod \"nova-api-db-create-v2w7l\" (UID: \"18132a30-759b-445e-887c-84acbf813072\") " pod="openstack/nova-api-db-create-v2w7l" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.050026 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxll\" (UniqueName: \"kubernetes.io/projected/9d0e3b03-58b3-4ece-be71-303a24548901-kube-api-access-spxll\") pod \"nova-cell0-db-create-pj8dd\" (UID: \"9d0e3b03-58b3-4ece-be71-303a24548901\") " pod="openstack/nova-cell0-db-create-pj8dd" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.080110 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q5fl\" (UniqueName: \"kubernetes.io/projected/18132a30-759b-445e-887c-84acbf813072-kube-api-access-9q5fl\") pod \"nova-api-db-create-v2w7l\" (UID: \"18132a30-759b-445e-887c-84acbf813072\") " pod="openstack/nova-api-db-create-v2w7l" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.118007 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-v2w7l" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.151832 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbz5\" (UniqueName: \"kubernetes.io/projected/f2b3c67e-9da6-4515-a39d-8b653cfb6b56-kube-api-access-5cbz5\") pod \"nova-cell1-db-create-7llw7\" (UID: \"f2b3c67e-9da6-4515-a39d-8b653cfb6b56\") " pod="openstack/nova-cell1-db-create-7llw7" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.152039 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spxll\" (UniqueName: \"kubernetes.io/projected/9d0e3b03-58b3-4ece-be71-303a24548901-kube-api-access-spxll\") pod \"nova-cell0-db-create-pj8dd\" (UID: \"9d0e3b03-58b3-4ece-be71-303a24548901\") " pod="openstack/nova-cell0-db-create-pj8dd" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.168088 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spxll\" (UniqueName: \"kubernetes.io/projected/9d0e3b03-58b3-4ece-be71-303a24548901-kube-api-access-spxll\") pod \"nova-cell0-db-create-pj8dd\" (UID: \"9d0e3b03-58b3-4ece-be71-303a24548901\") " pod="openstack/nova-cell0-db-create-pj8dd" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.236098 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pj8dd" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.253977 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cbz5\" (UniqueName: \"kubernetes.io/projected/f2b3c67e-9da6-4515-a39d-8b653cfb6b56-kube-api-access-5cbz5\") pod \"nova-cell1-db-create-7llw7\" (UID: \"f2b3c67e-9da6-4515-a39d-8b653cfb6b56\") " pod="openstack/nova-cell1-db-create-7llw7" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.268934 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cbz5\" (UniqueName: \"kubernetes.io/projected/f2b3c67e-9da6-4515-a39d-8b653cfb6b56-kube-api-access-5cbz5\") pod \"nova-cell1-db-create-7llw7\" (UID: \"f2b3c67e-9da6-4515-a39d-8b653cfb6b56\") " pod="openstack/nova-cell1-db-create-7llw7" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.345491 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7llw7" Nov 26 07:19:47 crc kubenswrapper[4909]: I1126 07:19:47.975311 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.029524 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-wth9q"] Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.029797 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" podUID="3ecb4448-9622-4e65-bcf6-85ed4c003817" containerName="dnsmasq-dns" containerID="cri-o://c5beae108eb0a8ffe39f9b2d9b009462ad169a9ad25ca5f0243d5a0ce2e29ef2" gracePeriod=10 Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.265814 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.319094 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.565399 4909 scope.go:117] "RemoveContainer" containerID="e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.738469 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503d0d1-cf4e-459a-b928-f92afd8368d5","Type":"ContainerDied","Data":"e95a03508d7e5707dafed14a4267836486a3ac980d7c4e922f64be88cb65cccb"} Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.738761 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e95a03508d7e5707dafed14a4267836486a3ac980d7c4e922f64be88cb65cccb" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.755088 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.801766 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.818227 4909 generic.go:334] "Generic (PLEG): container finished" podID="3ecb4448-9622-4e65-bcf6-85ed4c003817" containerID="c5beae108eb0a8ffe39f9b2d9b009462ad169a9ad25ca5f0243d5a0ce2e29ef2" exitCode=0 Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.818415 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2aec08aa-41f2-437a-8b51-a4d065dc4856" containerName="cinder-scheduler" containerID="cri-o://ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d" gracePeriod=30 Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.818497 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2aec08aa-41f2-437a-8b51-a4d065dc4856" containerName="probe" containerID="cri-o://1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56" gracePeriod=30 Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.818499 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" event={"ID":"3ecb4448-9622-4e65-bcf6-85ed4c003817","Type":"ContainerDied","Data":"c5beae108eb0a8ffe39f9b2d9b009462ad169a9ad25ca5f0243d5a0ce2e29ef2"} Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.833222 4909 scope.go:117] "RemoveContainer" containerID="33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.884620 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f503d0d1-cf4e-459a-b928-f92afd8368d5-etc-machine-id\") pod \"f503d0d1-cf4e-459a-b928-f92afd8368d5\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.884750 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data-custom\") pod \"f503d0d1-cf4e-459a-b928-f92afd8368d5\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.884798 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-scripts\") pod \"f503d0d1-cf4e-459a-b928-f92afd8368d5\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.884825 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8468q\" (UniqueName: \"kubernetes.io/projected/f503d0d1-cf4e-459a-b928-f92afd8368d5-kube-api-access-8468q\") pod \"f503d0d1-cf4e-459a-b928-f92afd8368d5\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.884848 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data\") pod \"f503d0d1-cf4e-459a-b928-f92afd8368d5\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.884873 4909 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-combined-ca-bundle\") pod \"f503d0d1-cf4e-459a-b928-f92afd8368d5\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.884907 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f503d0d1-cf4e-459a-b928-f92afd8368d5-logs\") pod \"f503d0d1-cf4e-459a-b928-f92afd8368d5\" (UID: \"f503d0d1-cf4e-459a-b928-f92afd8368d5\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.885681 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f503d0d1-cf4e-459a-b928-f92afd8368d5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f503d0d1-cf4e-459a-b928-f92afd8368d5" (UID: "f503d0d1-cf4e-459a-b928-f92afd8368d5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.885934 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d655886f6-h56wz"] Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.886110 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5d655886f6-h56wz" podUID="7aa7dca9-3bc0-4869-b69a-f2bbf2190038" containerName="neutron-api" containerID="cri-o://f830e018627073977f605e520fbf64ada9095f6bb653e33ef0ca390f3eb5fabe" gracePeriod=30 Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.888708 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5d655886f6-h56wz" podUID="7aa7dca9-3bc0-4869-b69a-f2bbf2190038" containerName="neutron-httpd" containerID="cri-o://6ce5bc27dbcd8bc437bbc74ad6462b2ac8d4570a131a3d43b4e39c235a6f2b13" gracePeriod=30 Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.889220 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f503d0d1-cf4e-459a-b928-f92afd8368d5-logs" (OuterVolumeSpecName: "logs") pod "f503d0d1-cf4e-459a-b928-f92afd8368d5" (UID: "f503d0d1-cf4e-459a-b928-f92afd8368d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.891851 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-scripts" (OuterVolumeSpecName: "scripts") pod "f503d0d1-cf4e-459a-b928-f92afd8368d5" (UID: "f503d0d1-cf4e-459a-b928-f92afd8368d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.894500 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f503d0d1-cf4e-459a-b928-f92afd8368d5" (UID: "f503d0d1-cf4e-459a-b928-f92afd8368d5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.895770 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f503d0d1-cf4e-459a-b928-f92afd8368d5-kube-api-access-8468q" (OuterVolumeSpecName: "kube-api-access-8468q") pod "f503d0d1-cf4e-459a-b928-f92afd8368d5" (UID: "f503d0d1-cf4e-459a-b928-f92afd8368d5"). 
InnerVolumeSpecName "kube-api-access-8468q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.912792 4909 scope.go:117] "RemoveContainer" containerID="3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e" Nov 26 07:19:48 crc kubenswrapper[4909]: E1126 07:19:48.916161 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e\": container with ID starting with 3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e not found: ID does not exist" containerID="3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.916204 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e"} err="failed to get container status \"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e\": rpc error: code = NotFound desc = could not find container \"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e\": container with ID starting with 3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.916225 4909 scope.go:117] "RemoveContainer" containerID="733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf" Nov 26 07:19:48 crc kubenswrapper[4909]: E1126 07:19:48.917110 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf\": container with ID starting with 733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf not found: ID does not exist" containerID="733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.917154 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf"} err="failed to get container status \"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf\": rpc error: code = NotFound desc = could not find container \"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf\": container with ID starting with 733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.917180 4909 scope.go:117] "RemoveContainer" containerID="e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9" Nov 26 07:19:48 crc kubenswrapper[4909]: E1126 07:19:48.917572 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9\": container with ID starting with e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9 not found: ID does not exist" containerID="e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.917631 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9"} err="failed to get container status 
\"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9\": rpc error: code = NotFound desc = could not find container \"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9\": container with ID starting with e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9 not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.917647 4909 scope.go:117] "RemoveContainer" containerID="33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0" Nov 26 07:19:48 crc kubenswrapper[4909]: E1126 07:19:48.918009 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0\": container with ID starting with 33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0 not found: ID does not exist" containerID="33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.918034 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0"} err="failed to get container status \"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0\": rpc error: code = NotFound desc = could not find container \"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0\": container with ID starting with 33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0 not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.918056 4909 scope.go:117] "RemoveContainer" containerID="3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.918247 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e"} err="failed to get container status \"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e\": rpc error: code = NotFound desc = could not find container \"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e\": container with ID starting with 3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.918266 4909 scope.go:117] "RemoveContainer" containerID="733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.918440 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf"} err="failed to get container status \"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf\": rpc error: code = NotFound desc = could not find container \"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf\": container with ID starting with 733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.918456 4909 scope.go:117] "RemoveContainer" containerID="e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.918784 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9"} err="failed to get container status 
\"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9\": rpc error: code = NotFound desc = could not find container \"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9\": container with ID starting with e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9 not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.918814 4909 scope.go:117] "RemoveContainer" containerID="33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.919000 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0"} err="failed to get container status \"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0\": rpc error: code = NotFound desc = could not find container \"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0\": container with ID starting with 33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0 not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.919016 4909 scope.go:117] "RemoveContainer" containerID="3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.919223 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e"} err="failed to get container status \"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e\": rpc error: code = NotFound desc = could not find container \"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e\": container with ID starting with 3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.919240 4909 scope.go:117] "RemoveContainer" containerID="733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.919388 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf"} err="failed to get container status \"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf\": rpc error: code = NotFound desc = could not find container \"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf\": container with ID starting with 733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.919405 4909 scope.go:117] "RemoveContainer" containerID="e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.919535 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9"} err="failed to get container status \"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9\": rpc error: code = NotFound desc = could not find container \"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9\": container with ID starting with e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9 not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.919552 4909 scope.go:117] "RemoveContainer" 
containerID="33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.919825 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f503d0d1-cf4e-459a-b928-f92afd8368d5" (UID: "f503d0d1-cf4e-459a-b928-f92afd8368d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.919867 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0"} err="failed to get container status \"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0\": rpc error: code = NotFound desc = could not find container \"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0\": container with ID starting with 33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0 not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.919884 4909 scope.go:117] "RemoveContainer" containerID="3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.920085 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e"} err="failed to get container status \"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e\": rpc error: code = NotFound desc = could not find container \"3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e\": container with ID starting with 3b86805972c5164486632fd12ae7bf7c1714fe2f8ed51af00cbd30cf40f3e95e not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.920105 4909 scope.go:117] "RemoveContainer" containerID="733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.920321 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf"} err="failed to get container status \"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf\": rpc error: code = NotFound desc = could not find container \"733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf\": container with ID starting with 733a1286f2d0a166ce55d0a053ad52d71a18ef5c50df0a5747a307a1f0207bbf not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.920364 4909 scope.go:117] "RemoveContainer" containerID="e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.920561 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9"} err="failed to get container status \"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9\": rpc error: code = NotFound desc = could not find container \"e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9\": container with ID starting with e50a3dad521692a9268d6d008e3fae19abc71dce39777a3e9bfc2704b8627cf9 not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.920577 4909 scope.go:117] "RemoveContainer" 
containerID="33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.921180 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0"} err="failed to get container status \"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0\": rpc error: code = NotFound desc = could not find container \"33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0\": container with ID starting with 33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0 not found: ID does not exist" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.935718 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.989121 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-config\") pod \"3ecb4448-9622-4e65-bcf6-85ed4c003817\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.989277 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-sb\") pod \"3ecb4448-9622-4e65-bcf6-85ed4c003817\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.989388 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k6hr\" (UniqueName: \"kubernetes.io/projected/3ecb4448-9622-4e65-bcf6-85ed4c003817-kube-api-access-7k6hr\") pod \"3ecb4448-9622-4e65-bcf6-85ed4c003817\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.989427 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-svc\") pod \"3ecb4448-9622-4e65-bcf6-85ed4c003817\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.989471 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-nb\") pod \"3ecb4448-9622-4e65-bcf6-85ed4c003817\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.989496 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-swift-storage-0\") pod \"3ecb4448-9622-4e65-bcf6-85ed4c003817\" (UID: \"3ecb4448-9622-4e65-bcf6-85ed4c003817\") " Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.990251 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.990287 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 
07:19:48.990296 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8468q\" (UniqueName: \"kubernetes.io/projected/f503d0d1-cf4e-459a-b928-f92afd8368d5-kube-api-access-8468q\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.990306 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.990316 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f503d0d1-cf4e-459a-b928-f92afd8368d5-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.990324 4909 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f503d0d1-cf4e-459a-b928-f92afd8368d5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:48 crc kubenswrapper[4909]: I1126 07:19:48.995795 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecb4448-9622-4e65-bcf6-85ed4c003817-kube-api-access-7k6hr" (OuterVolumeSpecName: "kube-api-access-7k6hr") pod "3ecb4448-9622-4e65-bcf6-85ed4c003817" (UID: "3ecb4448-9622-4e65-bcf6-85ed4c003817"). InnerVolumeSpecName "kube-api-access-7k6hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.093511 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k6hr\" (UniqueName: \"kubernetes.io/projected/3ecb4448-9622-4e65-bcf6-85ed4c003817-kube-api-access-7k6hr\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.210535 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data" (OuterVolumeSpecName: "config-data") pod "f503d0d1-cf4e-459a-b928-f92afd8368d5" (UID: "f503d0d1-cf4e-459a-b928-f92afd8368d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.222183 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-config" (OuterVolumeSpecName: "config") pod "3ecb4448-9622-4e65-bcf6-85ed4c003817" (UID: "3ecb4448-9622-4e65-bcf6-85ed4c003817"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.232848 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.251189 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3ecb4448-9622-4e65-bcf6-85ed4c003817" (UID: "3ecb4448-9622-4e65-bcf6-85ed4c003817"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.262080 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ecb4448-9622-4e65-bcf6-85ed4c003817" (UID: "3ecb4448-9622-4e65-bcf6-85ed4c003817"). 
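
The repeated "RemoveContainer" / "DeleteContainer returned error ... NotFound" pairs above are benign: the kubelet asks CRI-O for a container's status before deleting it, and once the container is already gone the runtime answers with gRPC code NotFound, which the kubelet logs and moves past. The cycle repeats once per remaining reference to the dead containers of the deleted cinder-api and dnsmasq pods. A minimal sketch of that idempotent-delete pattern, assuming a stand-in error value rather than the real gRPC status:

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for a CRI "rpc error: code = NotFound" status (assumption).
    var errNotFound = errors.New("container not found: ID does not exist")

    // removeContainer deletes idempotently: NotFound means a previous pass
    // (or the runtime itself) already removed the container, so it is not a failure.
    func removeContainer(id string, remove func(string) error) error {
        err := remove(id)
        if errors.Is(err, errNotFound) {
            fmt.Printf("container %.12s already gone, nothing to do\n", id)
            return nil // matches the repeated, harmless NotFound entries above
        }
        return err
    }

    func main() {
        gone := func(string) error { return errNotFound }
        _ = removeContainer("33c07c099ed73cd454fcb8f774fa77042f8c4d7d5749a56e24304229815be4d0", gone)
    }
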
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.276248 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ecb4448-9622-4e65-bcf6-85ed4c003817" (UID: "3ecb4448-9622-4e65-bcf6-85ed4c003817"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.278512 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ecb4448-9622-4e65-bcf6-85ed4c003817" (UID: "3ecb4448-9622-4e65-bcf6-85ed4c003817"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.299226 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f503d0d1-cf4e-459a-b928-f92afd8368d5-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.299256 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.299265 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.299275 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.299284 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.299294 4909 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ecb4448-9622-4e65-bcf6-85ed4c003817-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.564933 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pj8dd"] Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.573476 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-v2w7l"] Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.600266 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7llw7"] Nov 26 07:19:49 crc kubenswrapper[4909]: W1126 07:19:49.611501 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2b3c67e_9da6_4515_a39d_8b653cfb6b56.slice/crio-188ffb49f2607ad58cb9ed7c90698fdd0aaa96d30291b4bdaac9850a748e2b69 WatchSource:0}: Error finding container 188ffb49f2607ad58cb9ed7c90698fdd0aaa96d30291b4bdaac9850a748e2b69: Status 404 returned error can't find the container with id 
188ffb49f2607ad58cb9ed7c90698fdd0aaa96d30291b4bdaac9850a748e2b69 Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.835948 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pj8dd" event={"ID":"9d0e3b03-58b3-4ece-be71-303a24548901","Type":"ContainerStarted","Data":"b26a2b61ed6a70a161de450851da26efccdc3f1e4750fbfa8b40d4d680489caa"} Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.837484 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5c3859-9070-4d84-a4e5-812685fffc00","Type":"ContainerStarted","Data":"47c259e965e6672fc98c2329257f52a3b79a7f7fa56ad32f8777be4502c38fcc"} Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.839290 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7","Type":"ContainerStarted","Data":"5490bde309c9533492858121c3c3979518f2ae4d1909c148a36b275cb690a58a"} Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.848400 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" event={"ID":"3ecb4448-9622-4e65-bcf6-85ed4c003817","Type":"ContainerDied","Data":"64dd36ae010e23c6fabcc43356b79fee22bc72bdae8fffaa553fe9c28e261492"} Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.848441 4909 scope.go:117] "RemoveContainer" containerID="c5beae108eb0a8ffe39f9b2d9b009462ad169a9ad25ca5f0243d5a0ce2e29ef2" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.848530 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-wth9q" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.862999 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.735741742 podStartE2EDuration="20.862980907s" podCreationTimestamp="2025-11-26 07:19:29 +0000 UTC" firstStartedPulling="2025-11-26 07:19:30.537850045 +0000 UTC m=+1142.684061211" lastFinishedPulling="2025-11-26 07:19:48.66508921 +0000 UTC m=+1160.811300376" observedRunningTime="2025-11-26 07:19:49.862109762 +0000 UTC m=+1162.008320928" watchObservedRunningTime="2025-11-26 07:19:49.862980907 +0000 UTC m=+1162.009192073" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.865824 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v2w7l" event={"ID":"18132a30-759b-445e-887c-84acbf813072","Type":"ContainerStarted","Data":"e606943b830d8a7aa77bb632d1a6c84d3ce283950aa7dfc984a452adad99f786"} Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.881945 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7llw7" event={"ID":"f2b3c67e-9da6-4515-a39d-8b653cfb6b56","Type":"ContainerStarted","Data":"188ffb49f2607ad58cb9ed7c90698fdd0aaa96d30291b4bdaac9850a748e2b69"} Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.883137 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-wth9q"] Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.885664 4909 generic.go:334] "Generic (PLEG): container finished" podID="7aa7dca9-3bc0-4869-b69a-f2bbf2190038" containerID="6ce5bc27dbcd8bc437bbc74ad6462b2ac8d4570a131a3d43b4e39c235a6f2b13" exitCode=0 Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.885756 4909 util.go:48] "No ready sandbox for pod can be found. 
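
The pod_startup_latency_tracker entry for openstack/openstackclient above carries enough timestamps to check its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. the SLO metric excludes pull time. A short Go check, with the timestamps copied from the log line:

    package main

    import (
        "fmt"
        "time"
    )

    // Layout matching the kubelet's printed times; Go's time.Parse accepts the
    // optional fractional seconds even though the layout omits them.
    const layout = "2006-01-02 15:04:05 -0700 MST"

    func must(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := must("2025-11-26 07:19:29 +0000 UTC")
        firstPull := must("2025-11-26 07:19:30.537850045 +0000 UTC")
        lastPull := must("2025-11-26 07:19:48.66508921 +0000 UTC")
        observed := must("2025-11-26 07:19:49.862980907 +0000 UTC")

        e2e := observed.Sub(created)    // podStartE2EDuration
        pull := lastPull.Sub(firstPull) // image-pull window
        slo := e2e - pull               // podStartSLOduration

        fmt.Println(e2e, pull, slo) // 20.862980907s 18.127239165s 2.735741742s
    }
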
Need to start a new one" pod="openstack/cinder-api-0" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.885747 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d655886f6-h56wz" event={"ID":"7aa7dca9-3bc0-4869-b69a-f2bbf2190038","Type":"ContainerDied","Data":"6ce5bc27dbcd8bc437bbc74ad6462b2ac8d4570a131a3d43b4e39c235a6f2b13"} Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.891021 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-wth9q"] Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.911470 4909 scope.go:117] "RemoveContainer" containerID="96bf4023b231a1737ff7bdbef43367ddd232cbf376ab215cd6a62c332ffc23e4" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.964082 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.985654 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.993501 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:19:49 crc kubenswrapper[4909]: E1126 07:19:49.994021 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f503d0d1-cf4e-459a-b928-f92afd8368d5" containerName="cinder-api" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.994045 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f503d0d1-cf4e-459a-b928-f92afd8368d5" containerName="cinder-api" Nov 26 07:19:49 crc kubenswrapper[4909]: E1126 07:19:49.994057 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecb4448-9622-4e65-bcf6-85ed4c003817" containerName="dnsmasq-dns" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.994064 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecb4448-9622-4e65-bcf6-85ed4c003817" containerName="dnsmasq-dns" Nov 26 07:19:49 crc kubenswrapper[4909]: E1126 07:19:49.994083 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecb4448-9622-4e65-bcf6-85ed4c003817" containerName="init" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.994091 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecb4448-9622-4e65-bcf6-85ed4c003817" containerName="init" Nov 26 07:19:49 crc kubenswrapper[4909]: E1126 07:19:49.994118 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f503d0d1-cf4e-459a-b928-f92afd8368d5" containerName="cinder-api-log" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.994126 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f503d0d1-cf4e-459a-b928-f92afd8368d5" containerName="cinder-api-log" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.994366 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecb4448-9622-4e65-bcf6-85ed4c003817" containerName="dnsmasq-dns" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.994398 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f503d0d1-cf4e-459a-b928-f92afd8368d5" containerName="cinder-api" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.994411 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f503d0d1-cf4e-459a-b928-f92afd8368d5" containerName="cinder-api-log" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.995708 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 26 07:19:49 crc kubenswrapper[4909]: I1126 07:19:49.999136 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.001075 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.001160 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.001349 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.020529 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.020615 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07095ffe-adde-4857-93db-5a02f0adf9e6-logs\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.020666 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.020691 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07095ffe-adde-4857-93db-5a02f0adf9e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.020711 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.020750 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8czxx\" (UniqueName: \"kubernetes.io/projected/07095ffe-adde-4857-93db-5a02f0adf9e6-kube-api-access-8czxx\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.020785 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.020832 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.020860 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.122308 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.122360 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.122449 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.122473 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07095ffe-adde-4857-93db-5a02f0adf9e6-logs\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.122500 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.122529 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07095ffe-adde-4857-93db-5a02f0adf9e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.122547 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.122604 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8czxx\" (UniqueName: \"kubernetes.io/projected/07095ffe-adde-4857-93db-5a02f0adf9e6-kube-api-access-8czxx\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.122638 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.123319 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07095ffe-adde-4857-93db-5a02f0adf9e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.125766 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07095ffe-adde-4857-93db-5a02f0adf9e6-logs\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.129862 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.129878 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.130650 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.135248 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.135733 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.143176 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.146134 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8czxx\" (UniqueName: \"kubernetes.io/projected/07095ffe-adde-4857-93db-5a02f0adf9e6-kube-api-access-8czxx\") pod \"cinder-api-0\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.322344 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.511794 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ecb4448-9622-4e65-bcf6-85ed4c003817" path="/var/lib/kubelet/pods/3ecb4448-9622-4e65-bcf6-85ed4c003817/volumes" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.512603 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f503d0d1-cf4e-459a-b928-f92afd8368d5" path="/var/lib/kubelet/pods/f503d0d1-cf4e-459a-b928-f92afd8368d5/volumes" Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.829221 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.901152 4909 generic.go:334] "Generic (PLEG): container finished" podID="2aec08aa-41f2-437a-8b51-a4d065dc4856" containerID="1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56" exitCode=0 Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.901248 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2aec08aa-41f2-437a-8b51-a4d065dc4856","Type":"ContainerDied","Data":"1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56"} Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.904900 4909 generic.go:334] "Generic (PLEG): container finished" podID="f2b3c67e-9da6-4515-a39d-8b653cfb6b56" containerID="30e361990988bd171e5232954c25f4cf39111c82e9b5e67eeac5bf9805a9c92f" exitCode=0 Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.904949 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7llw7" event={"ID":"f2b3c67e-9da6-4515-a39d-8b653cfb6b56","Type":"ContainerDied","Data":"30e361990988bd171e5232954c25f4cf39111c82e9b5e67eeac5bf9805a9c92f"} Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.907348 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"07095ffe-adde-4857-93db-5a02f0adf9e6","Type":"ContainerStarted","Data":"5d8cfe480952ccec6c0dc1f80befee80044f33aba629dc5a87631947666ca62b"} Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.910438 4909 generic.go:334] "Generic (PLEG): container finished" podID="9d0e3b03-58b3-4ece-be71-303a24548901" containerID="25fec5357f4884a3d354306d4daf24db700a915aa9ded27e5750b66f49b50a46" exitCode=0 Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.910488 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pj8dd" event={"ID":"9d0e3b03-58b3-4ece-be71-303a24548901","Type":"ContainerDied","Data":"25fec5357f4884a3d354306d4daf24db700a915aa9ded27e5750b66f49b50a46"} Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.912953 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5c3859-9070-4d84-a4e5-812685fffc00","Type":"ContainerStarted","Data":"c5eac93dfa6bafbb7b87f1f13352e7d3bbd9ac620b77b706f39f9b353c880724"} Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.912980 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5c3859-9070-4d84-a4e5-812685fffc00","Type":"ContainerStarted","Data":"6f3b465c06d0438f05c6c8056b6902e2c9b3c3f4204c5ba55c56d8762970d16f"} Nov 26 07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.917806 4909 generic.go:334] "Generic (PLEG): container finished" podID="18132a30-759b-445e-887c-84acbf813072" containerID="2b21f4c34eb3dd787a432ad7f9eb9ad7e33cb5d88aafed3af5d2418fd9fff5b2" exitCode=0 Nov 26 
07:19:50 crc kubenswrapper[4909]: I1126 07:19:50.918495 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v2w7l" event={"ID":"18132a30-759b-445e-887c-84acbf813072","Type":"ContainerDied","Data":"2b21f4c34eb3dd787a432ad7f9eb9ad7e33cb5d88aafed3af5d2418fd9fff5b2"} Nov 26 07:19:51 crc kubenswrapper[4909]: I1126 07:19:51.931524 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"07095ffe-adde-4857-93db-5a02f0adf9e6","Type":"ContainerStarted","Data":"ffdcc38e7deac196a6a6dc47ac259ef1b3c1eaff9265239fbcdfb5425c3fe186"} Nov 26 07:19:51 crc kubenswrapper[4909]: I1126 07:19:51.934068 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5c3859-9070-4d84-a4e5-812685fffc00","Type":"ContainerStarted","Data":"cb4c1981624e477ce4ae1aef40310aa655b3d7f88537c47963173cd5ed7f3715"} Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.403377 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pj8dd" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.574266 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spxll\" (UniqueName: \"kubernetes.io/projected/9d0e3b03-58b3-4ece-be71-303a24548901-kube-api-access-spxll\") pod \"9d0e3b03-58b3-4ece-be71-303a24548901\" (UID: \"9d0e3b03-58b3-4ece-be71-303a24548901\") " Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.574837 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7llw7" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.594833 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0e3b03-58b3-4ece-be71-303a24548901-kube-api-access-spxll" (OuterVolumeSpecName: "kube-api-access-spxll") pod "9d0e3b03-58b3-4ece-be71-303a24548901" (UID: "9d0e3b03-58b3-4ece-be71-303a24548901"). InnerVolumeSpecName "kube-api-access-spxll". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.602906 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v2w7l" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.676176 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cbz5\" (UniqueName: \"kubernetes.io/projected/f2b3c67e-9da6-4515-a39d-8b653cfb6b56-kube-api-access-5cbz5\") pod \"f2b3c67e-9da6-4515-a39d-8b653cfb6b56\" (UID: \"f2b3c67e-9da6-4515-a39d-8b653cfb6b56\") " Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.677020 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spxll\" (UniqueName: \"kubernetes.io/projected/9d0e3b03-58b3-4ece-be71-303a24548901-kube-api-access-spxll\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.706847 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b3c67e-9da6-4515-a39d-8b653cfb6b56-kube-api-access-5cbz5" (OuterVolumeSpecName: "kube-api-access-5cbz5") pod "f2b3c67e-9da6-4515-a39d-8b653cfb6b56" (UID: "f2b3c67e-9da6-4515-a39d-8b653cfb6b56"). InnerVolumeSpecName "kube-api-access-5cbz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.778446 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q5fl\" (UniqueName: \"kubernetes.io/projected/18132a30-759b-445e-887c-84acbf813072-kube-api-access-9q5fl\") pod \"18132a30-759b-445e-887c-84acbf813072\" (UID: \"18132a30-759b-445e-887c-84acbf813072\") " Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.779440 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cbz5\" (UniqueName: \"kubernetes.io/projected/f2b3c67e-9da6-4515-a39d-8b653cfb6b56-kube-api-access-5cbz5\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.787750 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18132a30-759b-445e-887c-84acbf813072-kube-api-access-9q5fl" (OuterVolumeSpecName: "kube-api-access-9q5fl") pod "18132a30-759b-445e-887c-84acbf813072" (UID: "18132a30-759b-445e-887c-84acbf813072"). InnerVolumeSpecName "kube-api-access-9q5fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.880681 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q5fl\" (UniqueName: \"kubernetes.io/projected/18132a30-759b-445e-887c-84acbf813072-kube-api-access-9q5fl\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.971379 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7llw7" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.972806 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7llw7" event={"ID":"f2b3c67e-9da6-4515-a39d-8b653cfb6b56","Type":"ContainerDied","Data":"188ffb49f2607ad58cb9ed7c90698fdd0aaa96d30291b4bdaac9850a748e2b69"} Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.972882 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="188ffb49f2607ad58cb9ed7c90698fdd0aaa96d30291b4bdaac9850a748e2b69" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.986551 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"07095ffe-adde-4857-93db-5a02f0adf9e6","Type":"ContainerStarted","Data":"3e62a202acc19dddd034b5dca03867a48ca15be9ff76077f42a4246722cebcdf"} Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.987532 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.996737 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pj8dd" event={"ID":"9d0e3b03-58b3-4ece-be71-303a24548901","Type":"ContainerDied","Data":"b26a2b61ed6a70a161de450851da26efccdc3f1e4750fbfa8b40d4d680489caa"} Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.996769 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26a2b61ed6a70a161de450851da26efccdc3f1e4750fbfa8b40d4d680489caa" Nov 26 07:19:52 crc kubenswrapper[4909]: I1126 07:19:52.996819 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pj8dd" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.012690 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5c3859-9070-4d84-a4e5-812685fffc00","Type":"ContainerStarted","Data":"f505158c7d97278fe3ec47014d4b2cdb3900753a630192f30d6634c373417a3c"} Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.012849 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="ceilometer-central-agent" containerID="cri-o://6f3b465c06d0438f05c6c8056b6902e2c9b3c3f4204c5ba55c56d8762970d16f" gracePeriod=30 Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.012895 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.012919 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="proxy-httpd" containerID="cri-o://f505158c7d97278fe3ec47014d4b2cdb3900753a630192f30d6634c373417a3c" gracePeriod=30 Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.012954 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="ceilometer-notification-agent" containerID="cri-o://c5eac93dfa6bafbb7b87f1f13352e7d3bbd9ac620b77b706f39f9b353c880724" gracePeriod=30 Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.013020 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="sg-core" containerID="cri-o://cb4c1981624e477ce4ae1aef40310aa655b3d7f88537c47963173cd5ed7f3715" gracePeriod=30 Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.022183 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v2w7l" event={"ID":"18132a30-759b-445e-887c-84acbf813072","Type":"ContainerDied","Data":"e606943b830d8a7aa77bb632d1a6c84d3ce283950aa7dfc984a452adad99f786"} Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.022221 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e606943b830d8a7aa77bb632d1a6c84d3ce283950aa7dfc984a452adad99f786" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.022275 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-v2w7l" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.031178 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.031156517 podStartE2EDuration="4.031156517s" podCreationTimestamp="2025-11-26 07:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:53.02161005 +0000 UTC m=+1165.167821236" watchObservedRunningTime="2025-11-26 07:19:53.031156517 +0000 UTC m=+1165.177367683" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.070061 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.881393102 podStartE2EDuration="11.070035575s" podCreationTimestamp="2025-11-26 07:19:42 +0000 UTC" firstStartedPulling="2025-11-26 07:19:49.263310744 +0000 UTC m=+1161.409521910" lastFinishedPulling="2025-11-26 07:19:52.451953217 +0000 UTC m=+1164.598164383" observedRunningTime="2025-11-26 07:19:53.053950985 +0000 UTC m=+1165.200162151" watchObservedRunningTime="2025-11-26 07:19:53.070035575 +0000 UTC m=+1165.216246751" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.321483 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="f503d0d1-cf4e-459a-b928-f92afd8368d5" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.167:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.453476 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.591900 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-combined-ca-bundle\") pod \"2aec08aa-41f2-437a-8b51-a4d065dc4856\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.592023 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlkdx\" (UniqueName: \"kubernetes.io/projected/2aec08aa-41f2-437a-8b51-a4d065dc4856-kube-api-access-mlkdx\") pod \"2aec08aa-41f2-437a-8b51-a4d065dc4856\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.592100 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2aec08aa-41f2-437a-8b51-a4d065dc4856-etc-machine-id\") pod \"2aec08aa-41f2-437a-8b51-a4d065dc4856\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.592149 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-scripts\") pod \"2aec08aa-41f2-437a-8b51-a4d065dc4856\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.592168 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data\") pod \"2aec08aa-41f2-437a-8b51-a4d065dc4856\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " Nov 26 
07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.592186 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2aec08aa-41f2-437a-8b51-a4d065dc4856-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2aec08aa-41f2-437a-8b51-a4d065dc4856" (UID: "2aec08aa-41f2-437a-8b51-a4d065dc4856"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.592241 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data-custom\") pod \"2aec08aa-41f2-437a-8b51-a4d065dc4856\" (UID: \"2aec08aa-41f2-437a-8b51-a4d065dc4856\") " Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.592846 4909 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2aec08aa-41f2-437a-8b51-a4d065dc4856-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.596570 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aec08aa-41f2-437a-8b51-a4d065dc4856-kube-api-access-mlkdx" (OuterVolumeSpecName: "kube-api-access-mlkdx") pod "2aec08aa-41f2-437a-8b51-a4d065dc4856" (UID: "2aec08aa-41f2-437a-8b51-a4d065dc4856"). InnerVolumeSpecName "kube-api-access-mlkdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.598732 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2aec08aa-41f2-437a-8b51-a4d065dc4856" (UID: "2aec08aa-41f2-437a-8b51-a4d065dc4856"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.606800 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-scripts" (OuterVolumeSpecName: "scripts") pod "2aec08aa-41f2-437a-8b51-a4d065dc4856" (UID: "2aec08aa-41f2-437a-8b51-a4d065dc4856"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.694200 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlkdx\" (UniqueName: \"kubernetes.io/projected/2aec08aa-41f2-437a-8b51-a4d065dc4856-kube-api-access-mlkdx\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.694230 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.694240 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.701092 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2aec08aa-41f2-437a-8b51-a4d065dc4856" (UID: "2aec08aa-41f2-437a-8b51-a4d065dc4856"). 
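
The "Probe failed" entry at 07:19:53.321 is a leftover: it names the old cinder-api-0 UID (f503d0d1-...) and that pod's IP 10.217.0.167, i.e. a readiness probe that raced the pod's deletion and timed out against an endpoint that no longer answers, while the replacement cinder-api-0 (UID 07095ffe-...) was already starting containers on a new sandbox. The error text is Go's standard HTTP client timeout message; the probe's GET can be reproduced as below (URL copied from the log line, so it only means anything inside that cluster network):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Against a dead pod IP this yields exactly the logged error:
        // "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get("http://10.217.0.167:8776/healthcheck")
        if err != nil {
            fmt.Println("probeResult=failure:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probeResult=success:", resp.Status)
    }
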
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.716121 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data" (OuterVolumeSpecName: "config-data") pod "2aec08aa-41f2-437a-8b51-a4d065dc4856" (UID: "2aec08aa-41f2-437a-8b51-a4d065dc4856"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.795828 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:53 crc kubenswrapper[4909]: I1126 07:19:53.795866 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aec08aa-41f2-437a-8b51-a4d065dc4856-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.066995 4909 generic.go:334] "Generic (PLEG): container finished" podID="2aec08aa-41f2-437a-8b51-a4d065dc4856" containerID="ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d" exitCode=0 Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.067249 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2aec08aa-41f2-437a-8b51-a4d065dc4856","Type":"ContainerDied","Data":"ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d"} Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.067291 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2aec08aa-41f2-437a-8b51-a4d065dc4856","Type":"ContainerDied","Data":"ae755e468f645f767afab399f2eddb1cbec667d417e94321b4d7c2df67169642"} Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.067319 4909 scope.go:117] "RemoveContainer" containerID="1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.067210 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.077954 4909 generic.go:334] "Generic (PLEG): container finished" podID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerID="f505158c7d97278fe3ec47014d4b2cdb3900753a630192f30d6634c373417a3c" exitCode=0 Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.077987 4909 generic.go:334] "Generic (PLEG): container finished" podID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerID="cb4c1981624e477ce4ae1aef40310aa655b3d7f88537c47963173cd5ed7f3715" exitCode=2 Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.077996 4909 generic.go:334] "Generic (PLEG): container finished" podID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerID="c5eac93dfa6bafbb7b87f1f13352e7d3bbd9ac620b77b706f39f9b353c880724" exitCode=0 Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.078970 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5c3859-9070-4d84-a4e5-812685fffc00","Type":"ContainerDied","Data":"f505158c7d97278fe3ec47014d4b2cdb3900753a630192f30d6634c373417a3c"} Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.079010 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5c3859-9070-4d84-a4e5-812685fffc00","Type":"ContainerDied","Data":"cb4c1981624e477ce4ae1aef40310aa655b3d7f88537c47963173cd5ed7f3715"} Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.079025 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5c3859-9070-4d84-a4e5-812685fffc00","Type":"ContainerDied","Data":"c5eac93dfa6bafbb7b87f1f13352e7d3bbd9ac620b77b706f39f9b353c880724"} Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.093714 4909 scope.go:117] "RemoveContainer" containerID="ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.116017 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.122470 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.124160 4909 scope.go:117] "RemoveContainer" containerID="1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56" Nov 26 07:19:54 crc kubenswrapper[4909]: E1126 07:19:54.124521 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56\": container with ID starting with 1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56 not found: ID does not exist" containerID="1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.124549 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56"} err="failed to get container status \"1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56\": rpc error: code = NotFound desc = could not find container \"1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56\": container with ID starting with 1a4cb34bbafc520b2660b7e9321ef18d7b0324a51e9d3ffd020747e713017d56 not found: ID does not exist" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.124569 4909 scope.go:117] "RemoveContainer" 
containerID="ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d" Nov 26 07:19:54 crc kubenswrapper[4909]: E1126 07:19:54.127498 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d\": container with ID starting with ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d not found: ID does not exist" containerID="ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.127542 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d"} err="failed to get container status \"ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d\": rpc error: code = NotFound desc = could not find container \"ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d\": container with ID starting with ff0beb292bc21aae2355433f19e011fd28ed045c8a17911687d392581c006c8d not found: ID does not exist" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.149407 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 07:19:54 crc kubenswrapper[4909]: E1126 07:19:54.149838 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18132a30-759b-445e-887c-84acbf813072" containerName="mariadb-database-create" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.149858 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="18132a30-759b-445e-887c-84acbf813072" containerName="mariadb-database-create" Nov 26 07:19:54 crc kubenswrapper[4909]: E1126 07:19:54.149893 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aec08aa-41f2-437a-8b51-a4d065dc4856" containerName="probe" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.149901 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aec08aa-41f2-437a-8b51-a4d065dc4856" containerName="probe" Nov 26 07:19:54 crc kubenswrapper[4909]: E1126 07:19:54.149913 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b3c67e-9da6-4515-a39d-8b653cfb6b56" containerName="mariadb-database-create" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.149918 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b3c67e-9da6-4515-a39d-8b653cfb6b56" containerName="mariadb-database-create" Nov 26 07:19:54 crc kubenswrapper[4909]: E1126 07:19:54.149936 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aec08aa-41f2-437a-8b51-a4d065dc4856" containerName="cinder-scheduler" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.149941 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aec08aa-41f2-437a-8b51-a4d065dc4856" containerName="cinder-scheduler" Nov 26 07:19:54 crc kubenswrapper[4909]: E1126 07:19:54.149949 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0e3b03-58b3-4ece-be71-303a24548901" containerName="mariadb-database-create" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.149955 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0e3b03-58b3-4ece-be71-303a24548901" containerName="mariadb-database-create" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.150111 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aec08aa-41f2-437a-8b51-a4d065dc4856" containerName="probe" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.150122 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aec08aa-41f2-437a-8b51-a4d065dc4856" containerName="cinder-scheduler"
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.150137 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b3c67e-9da6-4515-a39d-8b653cfb6b56" containerName="mariadb-database-create"
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.150146 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="18132a30-759b-445e-887c-84acbf813072" containerName="mariadb-database-create"
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.150158 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0e3b03-58b3-4ece-be71-303a24548901" containerName="mariadb-database-create"
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.151075 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.157819 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.179048 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.306245 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0"
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.306492 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrbdw\" (UniqueName: \"kubernetes.io/projected/edba305d-f8e6-4ab0-ae68-30b668037813-kube-api-access-zrbdw\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0"
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.306709 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0"
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.306780 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-scripts\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0"
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.306806 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0"
Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.306928 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edba305d-f8e6-4ab0-ae68-30b668037813-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.408540 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.408695 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-scripts\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.408721 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.408974 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edba305d-f8e6-4ab0-ae68-30b668037813-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.409457 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edba305d-f8e6-4ab0-ae68-30b668037813-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.409513 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.409617 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrbdw\" (UniqueName: \"kubernetes.io/projected/edba305d-f8e6-4ab0-ae68-30b668037813-kube-api-access-zrbdw\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.414652 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-scripts\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.414812 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.415390 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.416125 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.430004 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrbdw\" (UniqueName: \"kubernetes.io/projected/edba305d-f8e6-4ab0-ae68-30b668037813-kube-api-access-zrbdw\") pod \"cinder-scheduler-0\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.507847 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aec08aa-41f2-437a-8b51-a4d065dc4856" path="/var/lib/kubelet/pods/2aec08aa-41f2-437a-8b51-a4d065dc4856/volumes" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.511279 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 07:19:54 crc kubenswrapper[4909]: I1126 07:19:54.969813 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 07:19:54 crc kubenswrapper[4909]: W1126 07:19:54.972797 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedba305d_f8e6_4ab0_ae68_30b668037813.slice/crio-5e22ed2f01780cbace706870709638e27f61418d8b9438137022b70f7bab9af6 WatchSource:0}: Error finding container 5e22ed2f01780cbace706870709638e27f61418d8b9438137022b70f7bab9af6: Status 404 returned error can't find the container with id 5e22ed2f01780cbace706870709638e27f61418d8b9438137022b70f7bab9af6 Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.093394 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edba305d-f8e6-4ab0-ae68-30b668037813","Type":"ContainerStarted","Data":"5e22ed2f01780cbace706870709638e27f61418d8b9438137022b70f7bab9af6"} Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.100182 4909 generic.go:334] "Generic (PLEG): container finished" podID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerID="6f3b465c06d0438f05c6c8056b6902e2c9b3c3f4204c5ba55c56d8762970d16f" exitCode=0 Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.100235 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5c3859-9070-4d84-a4e5-812685fffc00","Type":"ContainerDied","Data":"6f3b465c06d0438f05c6c8056b6902e2c9b3c3f4204c5ba55c56d8762970d16f"} Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.367109 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.528367 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-scripts\") pod \"bf5c3859-9070-4d84-a4e5-812685fffc00\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.529435 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-config-data\") pod \"bf5c3859-9070-4d84-a4e5-812685fffc00\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.529517 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hgsk\" (UniqueName: \"kubernetes.io/projected/bf5c3859-9070-4d84-a4e5-812685fffc00-kube-api-access-8hgsk\") pod \"bf5c3859-9070-4d84-a4e5-812685fffc00\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.529562 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-log-httpd\") pod \"bf5c3859-9070-4d84-a4e5-812685fffc00\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.529606 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-sg-core-conf-yaml\") pod \"bf5c3859-9070-4d84-a4e5-812685fffc00\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.529722 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-combined-ca-bundle\") pod \"bf5c3859-9070-4d84-a4e5-812685fffc00\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.529787 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-run-httpd\") pod \"bf5c3859-9070-4d84-a4e5-812685fffc00\" (UID: \"bf5c3859-9070-4d84-a4e5-812685fffc00\") " Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.530198 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bf5c3859-9070-4d84-a4e5-812685fffc00" (UID: "bf5c3859-9070-4d84-a4e5-812685fffc00"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.530407 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bf5c3859-9070-4d84-a4e5-812685fffc00" (UID: "bf5c3859-9070-4d84-a4e5-812685fffc00"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.530678 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.530703 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5c3859-9070-4d84-a4e5-812685fffc00-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.534527 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5c3859-9070-4d84-a4e5-812685fffc00-kube-api-access-8hgsk" (OuterVolumeSpecName: "kube-api-access-8hgsk") pod "bf5c3859-9070-4d84-a4e5-812685fffc00" (UID: "bf5c3859-9070-4d84-a4e5-812685fffc00"). InnerVolumeSpecName "kube-api-access-8hgsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.534605 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-scripts" (OuterVolumeSpecName: "scripts") pod "bf5c3859-9070-4d84-a4e5-812685fffc00" (UID: "bf5c3859-9070-4d84-a4e5-812685fffc00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.562796 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bf5c3859-9070-4d84-a4e5-812685fffc00" (UID: "bf5c3859-9070-4d84-a4e5-812685fffc00"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.621521 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf5c3859-9070-4d84-a4e5-812685fffc00" (UID: "bf5c3859-9070-4d84-a4e5-812685fffc00"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.632629 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.632666 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hgsk\" (UniqueName: \"kubernetes.io/projected/bf5c3859-9070-4d84-a4e5-812685fffc00-kube-api-access-8hgsk\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.632681 4909 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.632692 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.671537 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-config-data" (OuterVolumeSpecName: "config-data") pod "bf5c3859-9070-4d84-a4e5-812685fffc00" (UID: "bf5c3859-9070-4d84-a4e5-812685fffc00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:19:55 crc kubenswrapper[4909]: I1126 07:19:55.734823 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5c3859-9070-4d84-a4e5-812685fffc00-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.113242 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5c3859-9070-4d84-a4e5-812685fffc00","Type":"ContainerDied","Data":"47c259e965e6672fc98c2329257f52a3b79a7f7fa56ad32f8777be4502c38fcc"} Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.113285 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.113318 4909 scope.go:117] "RemoveContainer" containerID="f505158c7d97278fe3ec47014d4b2cdb3900753a630192f30d6634c373417a3c" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.116311 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edba305d-f8e6-4ab0-ae68-30b668037813","Type":"ContainerStarted","Data":"83f2fa0df126cd84a93da94a384252310d087fec1c7f6c1abf2c21ba3382de98"} Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.146078 4909 scope.go:117] "RemoveContainer" containerID="cb4c1981624e477ce4ae1aef40310aa655b3d7f88537c47963173cd5ed7f3715" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.161816 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.173454 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.186244 4909 scope.go:117] "RemoveContainer" containerID="c5eac93dfa6bafbb7b87f1f13352e7d3bbd9ac620b77b706f39f9b353c880724" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.186359 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:56 crc kubenswrapper[4909]: E1126 07:19:56.186695 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="sg-core" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.186711 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="sg-core" Nov 26 07:19:56 crc kubenswrapper[4909]: E1126 07:19:56.186742 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="proxy-httpd" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.186748 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="proxy-httpd" Nov 26 07:19:56 crc kubenswrapper[4909]: E1126 07:19:56.186759 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="ceilometer-notification-agent" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.186765 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="ceilometer-notification-agent" Nov 26 07:19:56 crc kubenswrapper[4909]: E1126 07:19:56.186779 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="ceilometer-central-agent" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.186785 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="ceilometer-central-agent" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.186935 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="ceilometer-notification-agent" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.186958 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="ceilometer-central-agent" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.186965 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" 
containerName="sg-core" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.186975 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" containerName="proxy-httpd" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.188411 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.191497 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.191681 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.211776 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.262678 4909 scope.go:117] "RemoveContainer" containerID="6f3b465c06d0438f05c6c8056b6902e2c9b3c3f4204c5ba55c56d8762970d16f" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.349530 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.349726 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-run-httpd\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.349782 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-scripts\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.349868 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-config-data\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.350107 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn4hh\" (UniqueName: \"kubernetes.io/projected/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-kube-api-access-wn4hh\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.350246 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-log-httpd\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.350329 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.452231 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-log-httpd\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.452285 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.452309 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.452362 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-run-httpd\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.452383 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-scripts\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.452420 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-config-data\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.452661 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn4hh\" (UniqueName: \"kubernetes.io/projected/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-kube-api-access-wn4hh\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.452960 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-run-httpd\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.452965 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-log-httpd\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.457069 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.459046 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.459705 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-config-data\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.460361 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-scripts\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.468331 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn4hh\" (UniqueName: \"kubernetes.io/projected/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-kube-api-access-wn4hh\") pod \"ceilometer-0\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " pod="openstack/ceilometer-0" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.509512 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5c3859-9070-4d84-a4e5-812685fffc00" path="/var/lib/kubelet/pods/bf5c3859-9070-4d84-a4e5-812685fffc00/volumes" Nov 26 07:19:56 crc kubenswrapper[4909]: I1126 07:19:56.509737 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:19:57 crc kubenswrapper[4909]: I1126 07:19:57.009789 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:57 crc kubenswrapper[4909]: I1126 07:19:57.129772 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa","Type":"ContainerStarted","Data":"f8ac5bfececa9711abf07c2375916882548da24fa60405a9451171547e211cf2"} Nov 26 07:19:57 crc kubenswrapper[4909]: I1126 07:19:57.131663 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edba305d-f8e6-4ab0-ae68-30b668037813","Type":"ContainerStarted","Data":"b589d87e51374dd79f69c4819dca7d38374f8adb25ebf560946dbab0a7dc7461"} Nov 26 07:19:57 crc kubenswrapper[4909]: I1126 07:19:57.158442 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.15841594 podStartE2EDuration="3.15841594s" podCreationTimestamp="2025-11-26 07:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:19:57.150848698 +0000 UTC m=+1169.297059884" watchObservedRunningTime="2025-11-26 07:19:57.15841594 +0000 UTC m=+1169.304627106" Nov 26 07:19:57 crc kubenswrapper[4909]: I1126 07:19:57.305581 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:19:58 crc kubenswrapper[4909]: I1126 07:19:58.183534 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa","Type":"ContainerStarted","Data":"79e55c801d1dd6d29d24561e259fb5f4ee7ea4e82893e041d06145da68a4e3b1"} Nov 26 07:19:59 crc kubenswrapper[4909]: I1126 07:19:59.198715 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa","Type":"ContainerStarted","Data":"0278710986f25f2206a5fea7eeb97cf9125dba5ef26dcc3029992fe573e594bd"} Nov 26 07:19:59 crc kubenswrapper[4909]: I1126 07:19:59.199054 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa","Type":"ContainerStarted","Data":"cc369d697ec530cae5d53f8f96f3f7022e57071ab440d96739658ebf6cc92cb0"} Nov 26 07:19:59 crc kubenswrapper[4909]: I1126 07:19:59.511921 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.252564 4909 generic.go:334] "Generic (PLEG): container finished" podID="7aa7dca9-3bc0-4869-b69a-f2bbf2190038" containerID="f830e018627073977f605e520fbf64ada9095f6bb653e33ef0ca390f3eb5fabe" exitCode=0 Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.252629 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d655886f6-h56wz" event={"ID":"7aa7dca9-3bc0-4869-b69a-f2bbf2190038","Type":"ContainerDied","Data":"f830e018627073977f605e520fbf64ada9095f6bb653e33ef0ca390f3eb5fabe"} Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.253149 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d655886f6-h56wz" event={"ID":"7aa7dca9-3bc0-4869-b69a-f2bbf2190038","Type":"ContainerDied","Data":"827770e82efc17369e95d0b37fe4a53335d9a70b269490faae753f15d2fdf8ba"} Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.253172 4909 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="827770e82efc17369e95d0b37fe4a53335d9a70b269490faae753f15d2fdf8ba" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.289614 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.440880 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-ovndb-tls-certs\") pod \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.441053 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-httpd-config\") pod \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.441084 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-config\") pod \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.441183 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-combined-ca-bundle\") pod \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.441237 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwfd2\" (UniqueName: \"kubernetes.io/projected/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-kube-api-access-lwfd2\") pod \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\" (UID: \"7aa7dca9-3bc0-4869-b69a-f2bbf2190038\") " Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.446790 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7aa7dca9-3bc0-4869-b69a-f2bbf2190038" (UID: "7aa7dca9-3bc0-4869-b69a-f2bbf2190038"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.446800 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-kube-api-access-lwfd2" (OuterVolumeSpecName: "kube-api-access-lwfd2") pod "7aa7dca9-3bc0-4869-b69a-f2bbf2190038" (UID: "7aa7dca9-3bc0-4869-b69a-f2bbf2190038"). InnerVolumeSpecName "kube-api-access-lwfd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.499688 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-config" (OuterVolumeSpecName: "config") pod "7aa7dca9-3bc0-4869-b69a-f2bbf2190038" (UID: "7aa7dca9-3bc0-4869-b69a-f2bbf2190038"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.510882 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7aa7dca9-3bc0-4869-b69a-f2bbf2190038" (UID: "7aa7dca9-3bc0-4869-b69a-f2bbf2190038"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.522778 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7aa7dca9-3bc0-4869-b69a-f2bbf2190038" (UID: "7aa7dca9-3bc0-4869-b69a-f2bbf2190038"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.542935 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.542966 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.542979 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.542991 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwfd2\" (UniqueName: \"kubernetes.io/projected/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-kube-api-access-lwfd2\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:00 crc kubenswrapper[4909]: I1126 07:20:00.543002 4909 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa7dca9-3bc0-4869-b69a-f2bbf2190038-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:01 crc kubenswrapper[4909]: I1126 07:20:01.263776 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d655886f6-h56wz" Nov 26 07:20:01 crc kubenswrapper[4909]: I1126 07:20:01.264654 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="ceilometer-central-agent" containerID="cri-o://79e55c801d1dd6d29d24561e259fb5f4ee7ea4e82893e041d06145da68a4e3b1" gracePeriod=30 Nov 26 07:20:01 crc kubenswrapper[4909]: I1126 07:20:01.264760 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="proxy-httpd" containerID="cri-o://2f05f9153e7ab70a5d2b666986b4c09eaafcecbcbc6e17a041849f6ebd82d42e" gracePeriod=30 Nov 26 07:20:01 crc kubenswrapper[4909]: I1126 07:20:01.264802 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="sg-core" containerID="cri-o://0278710986f25f2206a5fea7eeb97cf9125dba5ef26dcc3029992fe573e594bd" gracePeriod=30 Nov 26 07:20:01 crc kubenswrapper[4909]: I1126 07:20:01.264830 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="ceilometer-notification-agent" containerID="cri-o://cc369d697ec530cae5d53f8f96f3f7022e57071ab440d96739658ebf6cc92cb0" gracePeriod=30 Nov 26 07:20:01 crc kubenswrapper[4909]: I1126 07:20:01.265046 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa","Type":"ContainerStarted","Data":"2f05f9153e7ab70a5d2b666986b4c09eaafcecbcbc6e17a041849f6ebd82d42e"} Nov 26 07:20:01 crc kubenswrapper[4909]: I1126 07:20:01.265071 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 07:20:01 crc kubenswrapper[4909]: I1126 07:20:01.288901 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.17914231 podStartE2EDuration="5.288879632s" podCreationTimestamp="2025-11-26 07:19:56 +0000 UTC" firstStartedPulling="2025-11-26 07:19:57.017491186 +0000 UTC m=+1169.163702352" lastFinishedPulling="2025-11-26 07:20:00.127228508 +0000 UTC m=+1172.273439674" observedRunningTime="2025-11-26 07:20:01.287030661 +0000 UTC m=+1173.433241827" watchObservedRunningTime="2025-11-26 07:20:01.288879632 +0000 UTC m=+1173.435090798" Nov 26 07:20:01 crc kubenswrapper[4909]: I1126 07:20:01.317839 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d655886f6-h56wz"] Nov 26 07:20:01 crc kubenswrapper[4909]: I1126 07:20:01.325354 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5d655886f6-h56wz"] Nov 26 07:20:02 crc kubenswrapper[4909]: I1126 07:20:02.251882 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 26 07:20:02 crc kubenswrapper[4909]: I1126 07:20:02.276845 4909 generic.go:334] "Generic (PLEG): container finished" podID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerID="2f05f9153e7ab70a5d2b666986b4c09eaafcecbcbc6e17a041849f6ebd82d42e" exitCode=0 Nov 26 07:20:02 crc kubenswrapper[4909]: I1126 07:20:02.276879 4909 generic.go:334] "Generic (PLEG): container finished" podID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerID="0278710986f25f2206a5fea7eeb97cf9125dba5ef26dcc3029992fe573e594bd" exitCode=2 Nov 26 07:20:02 crc 
Nov 26 07:20:02 crc kubenswrapper[4909]: I1126 07:20:02.276889 4909 generic.go:334] "Generic (PLEG): container finished" podID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerID="cc369d697ec530cae5d53f8f96f3f7022e57071ab440d96739658ebf6cc92cb0" exitCode=0
Nov 26 07:20:02 crc kubenswrapper[4909]: I1126 07:20:02.276906 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa","Type":"ContainerDied","Data":"2f05f9153e7ab70a5d2b666986b4c09eaafcecbcbc6e17a041849f6ebd82d42e"}
Nov 26 07:20:02 crc kubenswrapper[4909]: I1126 07:20:02.276942 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa","Type":"ContainerDied","Data":"0278710986f25f2206a5fea7eeb97cf9125dba5ef26dcc3029992fe573e594bd"}
Nov 26 07:20:02 crc kubenswrapper[4909]: I1126 07:20:02.276954 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa","Type":"ContainerDied","Data":"cc369d697ec530cae5d53f8f96f3f7022e57071ab440d96739658ebf6cc92cb0"}
Nov 26 07:20:02 crc kubenswrapper[4909]: I1126 07:20:02.513210 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa7dca9-3bc0-4869-b69a-f2bbf2190038" path="/var/lib/kubelet/pods/7aa7dca9-3bc0-4869-b69a-f2bbf2190038/volumes"
Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.331448 4909 generic.go:334] "Generic (PLEG): container finished" podID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerID="79e55c801d1dd6d29d24561e259fb5f4ee7ea4e82893e041d06145da68a4e3b1" exitCode=0
Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.331758 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa","Type":"ContainerDied","Data":"79e55c801d1dd6d29d24561e259fb5f4ee7ea4e82893e041d06145da68a4e3b1"}
Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.474486 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.607130 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-combined-ca-bundle\") pod \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.607194 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn4hh\" (UniqueName: \"kubernetes.io/projected/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-kube-api-access-wn4hh\") pod \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.607236 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-scripts\") pod \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.607278 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-log-httpd\") pod \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.607347 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-run-httpd\") pod \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.607363 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-config-data\") pod \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.607389 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-sg-core-conf-yaml\") pod \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\" (UID: \"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa\") " Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.608162 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" (UID: "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.608334 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" (UID: "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.614566 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-scripts" (OuterVolumeSpecName: "scripts") pod "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" (UID: "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.615839 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-kube-api-access-wn4hh" (OuterVolumeSpecName: "kube-api-access-wn4hh") pod "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" (UID: "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa"). InnerVolumeSpecName "kube-api-access-wn4hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.637717 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" (UID: "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.695282 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" (UID: "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.709535 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.709857 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn4hh\" (UniqueName: \"kubernetes.io/projected/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-kube-api-access-wn4hh\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.709868 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.709877 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.709885 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.709895 4909 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.712429 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-config-data" (OuterVolumeSpecName: "config-data") pod "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" (UID: "f13e58a8-6c61-4d7f-83a4-bfef1afc89fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.811151 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.886943 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.887387 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="56b43716-a21b-439a-ab99-835173e5d8bc" containerName="glance-httpd" containerID="cri-o://a169304f4fdb7a1da8086638385028b6d2efac6ea2de938d901c37a0fadfd111" gracePeriod=30 Nov 26 07:20:03 crc kubenswrapper[4909]: I1126 07:20:03.887303 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="56b43716-a21b-439a-ab99-835173e5d8bc" containerName="glance-log" containerID="cri-o://7b5ccfe9ae36637c50a8d35cef08c83468a7c9c281bc26f62a66a85db369bc90" gracePeriod=30 Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.341136 4909 generic.go:334] "Generic (PLEG): container finished" podID="56b43716-a21b-439a-ab99-835173e5d8bc" containerID="7b5ccfe9ae36637c50a8d35cef08c83468a7c9c281bc26f62a66a85db369bc90" exitCode=143 Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.341209 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56b43716-a21b-439a-ab99-835173e5d8bc","Type":"ContainerDied","Data":"7b5ccfe9ae36637c50a8d35cef08c83468a7c9c281bc26f62a66a85db369bc90"} Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.345057 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f13e58a8-6c61-4d7f-83a4-bfef1afc89fa","Type":"ContainerDied","Data":"f8ac5bfececa9711abf07c2375916882548da24fa60405a9451171547e211cf2"} Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.345105 4909 scope.go:117] "RemoveContainer" containerID="2f05f9153e7ab70a5d2b666986b4c09eaafcecbcbc6e17a041849f6ebd82d42e" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.345224 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.386199 4909 scope.go:117] "RemoveContainer" containerID="0278710986f25f2206a5fea7eeb97cf9125dba5ef26dcc3029992fe573e594bd" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.408092 4909 scope.go:117] "RemoveContainer" containerID="cc369d697ec530cae5d53f8f96f3f7022e57071ab440d96739658ebf6cc92cb0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.414888 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.427192 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.433349 4909 scope.go:117] "RemoveContainer" containerID="79e55c801d1dd6d29d24561e259fb5f4ee7ea4e82893e041d06145da68a4e3b1" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.435320 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:20:04 crc kubenswrapper[4909]: E1126 07:20:04.435799 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa7dca9-3bc0-4869-b69a-f2bbf2190038" containerName="neutron-httpd" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.435822 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa7dca9-3bc0-4869-b69a-f2bbf2190038" containerName="neutron-httpd" Nov 26 07:20:04 crc kubenswrapper[4909]: E1126 07:20:04.435851 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="sg-core" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.435860 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="sg-core" Nov 26 07:20:04 crc kubenswrapper[4909]: E1126 07:20:04.435890 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="proxy-httpd" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.435898 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="proxy-httpd" Nov 26 07:20:04 crc kubenswrapper[4909]: E1126 07:20:04.435912 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa7dca9-3bc0-4869-b69a-f2bbf2190038" containerName="neutron-api" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.435919 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa7dca9-3bc0-4869-b69a-f2bbf2190038" containerName="neutron-api" Nov 26 07:20:04 crc kubenswrapper[4909]: E1126 07:20:04.435931 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="ceilometer-central-agent" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.435939 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="ceilometer-central-agent" Nov 26 07:20:04 crc kubenswrapper[4909]: E1126 07:20:04.435956 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="ceilometer-notification-agent" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.435967 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="ceilometer-notification-agent" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.436159 4909 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="ceilometer-central-agent" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.436174 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="proxy-httpd" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.436188 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa7dca9-3bc0-4869-b69a-f2bbf2190038" containerName="neutron-api" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.436202 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa7dca9-3bc0-4869-b69a-f2bbf2190038" containerName="neutron-httpd" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.436212 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="sg-core" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.436224 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" containerName="ceilometer-notification-agent" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.440410 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.442539 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.442756 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.452955 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.511225 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f13e58a8-6c61-4d7f-83a4-bfef1afc89fa" path="/var/lib/kubelet/pods/f13e58a8-6c61-4d7f-83a4-bfef1afc89fa/volumes" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.530809 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.530857 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m774\" (UniqueName: \"kubernetes.io/projected/a2d0d49e-98d5-488c-a739-b43cdb3013a1-kube-api-access-2m774\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.530895 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-scripts\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.530930 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 
07:20:04.530955 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-config-data\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.530982 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-run-httpd\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.531014 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-log-httpd\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.632930 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-config-data\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.632989 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-run-httpd\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.633038 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-log-httpd\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.633122 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.633143 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m774\" (UniqueName: \"kubernetes.io/projected/a2d0d49e-98d5-488c-a739-b43cdb3013a1-kube-api-access-2m774\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.633180 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-scripts\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.633226 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc 
kubenswrapper[4909]: I1126 07:20:04.634282 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-run-httpd\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.634298 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-log-httpd\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.638554 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.638727 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.640531 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-scripts\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.644375 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-config-data\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.652128 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m774\" (UniqueName: \"kubernetes.io/projected/a2d0d49e-98d5-488c-a739-b43cdb3013a1-kube-api-access-2m774\") pod \"ceilometer-0\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") " pod="openstack/ceilometer-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.707191 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 26 07:20:04 crc kubenswrapper[4909]: I1126 07:20:04.771422 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:20:05 crc kubenswrapper[4909]: I1126 07:20:05.272969 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:20:05 crc kubenswrapper[4909]: W1126 07:20:05.279500 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2d0d49e_98d5_488c_a739_b43cdb3013a1.slice/crio-b1ec3979b7040fbb7978c009c9f08b87ea00b7bd43ade0988f45caf6086d0d47 WatchSource:0}: Error finding container b1ec3979b7040fbb7978c009c9f08b87ea00b7bd43ade0988f45caf6086d0d47: Status 404 returned error can't find the container with id b1ec3979b7040fbb7978c009c9f08b87ea00b7bd43ade0988f45caf6086d0d47 Nov 26 07:20:05 crc kubenswrapper[4909]: I1126 07:20:05.300647 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 07:20:05 crc kubenswrapper[4909]: I1126 07:20:05.300874 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" containerName="glance-log" containerID="cri-o://15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9" gracePeriod=30 Nov 26 07:20:05 crc kubenswrapper[4909]: I1126 07:20:05.300984 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" containerName="glance-httpd" containerID="cri-o://9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220" gracePeriod=30 Nov 26 07:20:05 crc kubenswrapper[4909]: I1126 07:20:05.354756 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2d0d49e-98d5-488c-a739-b43cdb3013a1","Type":"ContainerStarted","Data":"b1ec3979b7040fbb7978c009c9f08b87ea00b7bd43ade0988f45caf6086d0d47"} Nov 26 07:20:05 crc kubenswrapper[4909]: I1126 07:20:05.587421 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:20:06 crc kubenswrapper[4909]: I1126 07:20:06.370758 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2d0d49e-98d5-488c-a739-b43cdb3013a1","Type":"ContainerStarted","Data":"da1179331683703744dab835e1520b28f87db03e32e364ebd254b655f44ab702"} Nov 26 07:20:06 crc kubenswrapper[4909]: I1126 07:20:06.373462 4909 generic.go:334] "Generic (PLEG): container finished" podID="68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" containerID="15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9" exitCode=143 Nov 26 07:20:06 crc kubenswrapper[4909]: I1126 07:20:06.373501 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14","Type":"ContainerDied","Data":"15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9"} Nov 26 07:20:06 crc kubenswrapper[4909]: I1126 07:20:06.933535 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1964-account-create-gb24n"] Nov 26 07:20:06 crc kubenswrapper[4909]: I1126 07:20:06.938630 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1964-account-create-gb24n" Nov 26 07:20:06 crc kubenswrapper[4909]: I1126 07:20:06.947765 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 26 07:20:06 crc kubenswrapper[4909]: I1126 07:20:06.966150 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1964-account-create-gb24n"] Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.075853 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbhzr\" (UniqueName: \"kubernetes.io/projected/2c61b861-dea8-48b1-a0f3-aeec4d1cb973-kube-api-access-pbhzr\") pod \"nova-api-1964-account-create-gb24n\" (UID: \"2c61b861-dea8-48b1-a0f3-aeec4d1cb973\") " pod="openstack/nova-api-1964-account-create-gb24n" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.132064 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-50ac-account-create-c58tq"] Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.133821 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-50ac-account-create-c58tq" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.135728 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.138619 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-50ac-account-create-c58tq"] Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.177389 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbhzr\" (UniqueName: \"kubernetes.io/projected/2c61b861-dea8-48b1-a0f3-aeec4d1cb973-kube-api-access-pbhzr\") pod \"nova-api-1964-account-create-gb24n\" (UID: \"2c61b861-dea8-48b1-a0f3-aeec4d1cb973\") " pod="openstack/nova-api-1964-account-create-gb24n" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.194852 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbhzr\" (UniqueName: \"kubernetes.io/projected/2c61b861-dea8-48b1-a0f3-aeec4d1cb973-kube-api-access-pbhzr\") pod \"nova-api-1964-account-create-gb24n\" (UID: \"2c61b861-dea8-48b1-a0f3-aeec4d1cb973\") " pod="openstack/nova-api-1964-account-create-gb24n" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.278914 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdjjh\" (UniqueName: \"kubernetes.io/projected/0aedc9a8-307e-4ea5-bc63-b6c661275773-kube-api-access-jdjjh\") pod \"nova-cell0-50ac-account-create-c58tq\" (UID: \"0aedc9a8-307e-4ea5-bc63-b6c661275773\") " pod="openstack/nova-cell0-50ac-account-create-c58tq" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.283049 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1964-account-create-gb24n" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.346397 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8b90-account-create-t78ss"] Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.348244 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8b90-account-create-t78ss" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.351796 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.364951 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8b90-account-create-t78ss"] Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.380419 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdjjh\" (UniqueName: \"kubernetes.io/projected/0aedc9a8-307e-4ea5-bc63-b6c661275773-kube-api-access-jdjjh\") pod \"nova-cell0-50ac-account-create-c58tq\" (UID: \"0aedc9a8-307e-4ea5-bc63-b6c661275773\") " pod="openstack/nova-cell0-50ac-account-create-c58tq" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.388290 4909 generic.go:334] "Generic (PLEG): container finished" podID="56b43716-a21b-439a-ab99-835173e5d8bc" containerID="a169304f4fdb7a1da8086638385028b6d2efac6ea2de938d901c37a0fadfd111" exitCode=0 Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.388377 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56b43716-a21b-439a-ab99-835173e5d8bc","Type":"ContainerDied","Data":"a169304f4fdb7a1da8086638385028b6d2efac6ea2de938d901c37a0fadfd111"} Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.390109 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2d0d49e-98d5-488c-a739-b43cdb3013a1","Type":"ContainerStarted","Data":"fb7ba41d1b508a19c8614b06d5b351863aa4f60b8d3c20fada30c698bfaa47b8"} Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.418024 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdjjh\" (UniqueName: \"kubernetes.io/projected/0aedc9a8-307e-4ea5-bc63-b6c661275773-kube-api-access-jdjjh\") pod \"nova-cell0-50ac-account-create-c58tq\" (UID: \"0aedc9a8-307e-4ea5-bc63-b6c661275773\") " pod="openstack/nova-cell0-50ac-account-create-c58tq" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.483156 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6k7l\" (UniqueName: \"kubernetes.io/projected/fdcdfe5a-4464-43fe-94de-09f58a3f7a46-kube-api-access-l6k7l\") pod \"nova-cell1-8b90-account-create-t78ss\" (UID: \"fdcdfe5a-4464-43fe-94de-09f58a3f7a46\") " pod="openstack/nova-cell1-8b90-account-create-t78ss" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.542982 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.581416 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-50ac-account-create-c58tq" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.586124 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6k7l\" (UniqueName: \"kubernetes.io/projected/fdcdfe5a-4464-43fe-94de-09f58a3f7a46-kube-api-access-l6k7l\") pod \"nova-cell1-8b90-account-create-t78ss\" (UID: \"fdcdfe5a-4464-43fe-94de-09f58a3f7a46\") " pod="openstack/nova-cell1-8b90-account-create-t78ss" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.610361 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6k7l\" (UniqueName: \"kubernetes.io/projected/fdcdfe5a-4464-43fe-94de-09f58a3f7a46-kube-api-access-l6k7l\") pod \"nova-cell1-8b90-account-create-t78ss\" (UID: \"fdcdfe5a-4464-43fe-94de-09f58a3f7a46\") " pod="openstack/nova-cell1-8b90-account-create-t78ss" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.686091 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8b90-account-create-t78ss" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.688247 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-combined-ca-bundle\") pod \"56b43716-a21b-439a-ab99-835173e5d8bc\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.688304 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj692\" (UniqueName: \"kubernetes.io/projected/56b43716-a21b-439a-ab99-835173e5d8bc-kube-api-access-nj692\") pod \"56b43716-a21b-439a-ab99-835173e5d8bc\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.688351 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-httpd-run\") pod \"56b43716-a21b-439a-ab99-835173e5d8bc\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.688436 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-public-tls-certs\") pod \"56b43716-a21b-439a-ab99-835173e5d8bc\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.688499 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-config-data\") pod \"56b43716-a21b-439a-ab99-835173e5d8bc\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.688523 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"56b43716-a21b-439a-ab99-835173e5d8bc\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.688566 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-scripts\") pod \"56b43716-a21b-439a-ab99-835173e5d8bc\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " Nov 26 07:20:07 crc 
kubenswrapper[4909]: I1126 07:20:07.688600 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-logs\") pod \"56b43716-a21b-439a-ab99-835173e5d8bc\" (UID: \"56b43716-a21b-439a-ab99-835173e5d8bc\") " Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.689388 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-logs" (OuterVolumeSpecName: "logs") pod "56b43716-a21b-439a-ab99-835173e5d8bc" (UID: "56b43716-a21b-439a-ab99-835173e5d8bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.690104 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "56b43716-a21b-439a-ab99-835173e5d8bc" (UID: "56b43716-a21b-439a-ab99-835173e5d8bc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.691888 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "56b43716-a21b-439a-ab99-835173e5d8bc" (UID: "56b43716-a21b-439a-ab99-835173e5d8bc"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.692386 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-scripts" (OuterVolumeSpecName: "scripts") pod "56b43716-a21b-439a-ab99-835173e5d8bc" (UID: "56b43716-a21b-439a-ab99-835173e5d8bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.711878 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56b43716-a21b-439a-ab99-835173e5d8bc-kube-api-access-nj692" (OuterVolumeSpecName: "kube-api-access-nj692") pod "56b43716-a21b-439a-ab99-835173e5d8bc" (UID: "56b43716-a21b-439a-ab99-835173e5d8bc"). InnerVolumeSpecName "kube-api-access-nj692". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.753945 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-config-data" (OuterVolumeSpecName: "config-data") pod "56b43716-a21b-439a-ab99-835173e5d8bc" (UID: "56b43716-a21b-439a-ab99-835173e5d8bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.765000 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56b43716-a21b-439a-ab99-835173e5d8bc" (UID: "56b43716-a21b-439a-ab99-835173e5d8bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.773753 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "56b43716-a21b-439a-ab99-835173e5d8bc" (UID: "56b43716-a21b-439a-ab99-835173e5d8bc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.790485 4909 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.790515 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.790549 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.790560 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.790568 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.790577 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b43716-a21b-439a-ab99-835173e5d8bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.790586 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj692\" (UniqueName: \"kubernetes.io/projected/56b43716-a21b-439a-ab99-835173e5d8bc-kube-api-access-nj692\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.790611 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56b43716-a21b-439a-ab99-835173e5d8bc-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.814753 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 26 07:20:07 crc kubenswrapper[4909]: W1126 07:20:07.877300 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c61b861_dea8_48b1_a0f3_aeec4d1cb973.slice/crio-01d4fd35d63c05c6fcc40940ae81edbf4b5bc1dc9af0c464f415de9cb3fa0f9c WatchSource:0}: Error finding container 01d4fd35d63c05c6fcc40940ae81edbf4b5bc1dc9af0c464f415de9cb3fa0f9c: Status 404 returned error can't find the container with id 01d4fd35d63c05c6fcc40940ae81edbf4b5bc1dc9af0c464f415de9cb3fa0f9c Nov 26 07:20:07 crc kubenswrapper[4909]: I1126 07:20:07.884452 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1964-account-create-gb24n"] Nov 26 07:20:07 crc 
kubenswrapper[4909]: I1126 07:20:07.891963 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.099882 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-50ac-account-create-c58tq"] Nov 26 07:20:08 crc kubenswrapper[4909]: W1126 07:20:08.119329 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aedc9a8_307e_4ea5_bc63_b6c661275773.slice/crio-45da4a9004175364ba46a692c372e754a021e3f5b95d6567340dfa7d5cd7acef WatchSource:0}: Error finding container 45da4a9004175364ba46a692c372e754a021e3f5b95d6567340dfa7d5cd7acef: Status 404 returned error can't find the container with id 45da4a9004175364ba46a692c372e754a021e3f5b95d6567340dfa7d5cd7acef Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.239069 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8b90-account-create-t78ss"] Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.404474 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56b43716-a21b-439a-ab99-835173e5d8bc","Type":"ContainerDied","Data":"3e60e06149539ab7039cac6ee8dde8b70c1ef81090ac3e0a9b1c3738326d395f"} Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.404532 4909 scope.go:117] "RemoveContainer" containerID="a169304f4fdb7a1da8086638385028b6d2efac6ea2de938d901c37a0fadfd111" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.404698 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.407486 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-50ac-account-create-c58tq" event={"ID":"0aedc9a8-307e-4ea5-bc63-b6c661275773","Type":"ContainerStarted","Data":"45da4a9004175364ba46a692c372e754a021e3f5b95d6567340dfa7d5cd7acef"} Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.417676 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2d0d49e-98d5-488c-a739-b43cdb3013a1","Type":"ContainerStarted","Data":"cd20b82b6cb255cb8ec632073b8e9990c8b02851b60c6247e0a5fe9e8944b3ee"} Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.422027 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8b90-account-create-t78ss" event={"ID":"fdcdfe5a-4464-43fe-94de-09f58a3f7a46","Type":"ContainerStarted","Data":"32227aa35f7d32c38f3423d81c1a433c907d144af97ab022171ff6a45dda0e7d"} Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.438478 4909 generic.go:334] "Generic (PLEG): container finished" podID="2c61b861-dea8-48b1-a0f3-aeec4d1cb973" containerID="cd1005b99a18b17cad8c1d2e3152591e37e1fb5b2ad03cedfc3c842868488ea8" exitCode=0 Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.438523 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1964-account-create-gb24n" event={"ID":"2c61b861-dea8-48b1-a0f3-aeec4d1cb973","Type":"ContainerDied","Data":"cd1005b99a18b17cad8c1d2e3152591e37e1fb5b2ad03cedfc3c842868488ea8"} Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.438546 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1964-account-create-gb24n" 
event={"ID":"2c61b861-dea8-48b1-a0f3-aeec4d1cb973","Type":"ContainerStarted","Data":"01d4fd35d63c05c6fcc40940ae81edbf4b5bc1dc9af0c464f415de9cb3fa0f9c"} Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.447305 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.469660 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.470395 4909 scope.go:117] "RemoveContainer" containerID="7b5ccfe9ae36637c50a8d35cef08c83468a7c9c281bc26f62a66a85db369bc90" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.474466 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 07:20:08 crc kubenswrapper[4909]: E1126 07:20:08.474858 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b43716-a21b-439a-ab99-835173e5d8bc" containerName="glance-log" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.474876 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b43716-a21b-439a-ab99-835173e5d8bc" containerName="glance-log" Nov 26 07:20:08 crc kubenswrapper[4909]: E1126 07:20:08.474900 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b43716-a21b-439a-ab99-835173e5d8bc" containerName="glance-httpd" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.474915 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b43716-a21b-439a-ab99-835173e5d8bc" containerName="glance-httpd" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.475145 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b43716-a21b-439a-ab99-835173e5d8bc" containerName="glance-httpd" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.475182 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b43716-a21b-439a-ab99-835173e5d8bc" containerName="glance-log" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.476122 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.479056 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.479229 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.543882 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56b43716-a21b-439a-ab99-835173e5d8bc" path="/var/lib/kubelet/pods/56b43716-a21b-439a-ab99-835173e5d8bc/volumes" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.544816 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.619245 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.619343 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-logs\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.619404 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.619465 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.619543 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f77tf\" (UniqueName: \"kubernetes.io/projected/6791905e-4b74-417e-bc1b-0747eac5878e-kube-api-access-f77tf\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.619602 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.619645 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.619707 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-config-data\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.721317 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.721611 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.721660 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-config-data\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.721687 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.721729 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-logs\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.721763 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.721794 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.721836 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f77tf\" (UniqueName: \"kubernetes.io/projected/6791905e-4b74-417e-bc1b-0747eac5878e-kube-api-access-f77tf\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " 
pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.722629 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-logs\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.722872 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.722948 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.726827 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.727604 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.728176 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.730993 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-config-data\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.742378 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f77tf\" (UniqueName: \"kubernetes.io/projected/6791905e-4b74-417e-bc1b-0747eac5878e-kube-api-access-f77tf\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.750338 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " pod="openstack/glance-default-external-api-0" Nov 26 07:20:08 crc kubenswrapper[4909]: I1126 07:20:08.894924 4909 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.155108 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.231022 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlmt7\" (UniqueName: \"kubernetes.io/projected/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-kube-api-access-xlmt7\") pod \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.231269 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.231535 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-internal-tls-certs\") pod \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.231705 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-config-data\") pod \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.232195 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-httpd-run\") pod \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.232410 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-logs\") pod \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.232495 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-scripts\") pod \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.232578 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" (UID: "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.232759 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-combined-ca-bundle\") pod \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\" (UID: \"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14\") " Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.232910 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-logs" (OuterVolumeSpecName: "logs") pod "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" (UID: "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.233497 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.233610 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.238949 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" (UID: "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.239052 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-scripts" (OuterVolumeSpecName: "scripts") pod "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" (UID: "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.259030 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-kube-api-access-xlmt7" (OuterVolumeSpecName: "kube-api-access-xlmt7") pod "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" (UID: "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14"). InnerVolumeSpecName "kube-api-access-xlmt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.261745 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" (UID: "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.297251 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-config-data" (OuterVolumeSpecName: "config-data") pod "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" (UID: "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.313779 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" (UID: "68e9e66f-ffe3-49f0-ba72-de59a7f2ec14"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.335438 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.335480 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.335498 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlmt7\" (UniqueName: \"kubernetes.io/projected/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-kube-api-access-xlmt7\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.335535 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.335550 4909 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.335561 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.357405 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.437711 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.452455 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2d0d49e-98d5-488c-a739-b43cdb3013a1","Type":"ContainerStarted","Data":"77a55daed39f1df8e0111c410dc163e4c956a76e2dbeb26fb2c46f8a0c83a4c9"}
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.452722 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="ceilometer-central-agent" containerID="cri-o://da1179331683703744dab835e1520b28f87db03e32e364ebd254b655f44ab702" gracePeriod=30
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.452797 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.453282 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="proxy-httpd" containerID="cri-o://77a55daed39f1df8e0111c410dc163e4c956a76e2dbeb26fb2c46f8a0c83a4c9" gracePeriod=30
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.453422 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="ceilometer-notification-agent" containerID="cri-o://fb7ba41d1b508a19c8614b06d5b351863aa4f60b8d3c20fada30c698bfaa47b8" gracePeriod=30
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.453481 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="sg-core" containerID="cri-o://cd20b82b6cb255cb8ec632073b8e9990c8b02851b60c6247e0a5fe9e8944b3ee" gracePeriod=30
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.458427 4909 generic.go:334] "Generic (PLEG): container finished" podID="fdcdfe5a-4464-43fe-94de-09f58a3f7a46" containerID="327c87ed3286e637f70f01904430f88a5b31dee163ca3d8ba6a36eb76ab58adb" exitCode=0
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.458478 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8b90-account-create-t78ss" event={"ID":"fdcdfe5a-4464-43fe-94de-09f58a3f7a46","Type":"ContainerDied","Data":"327c87ed3286e637f70f01904430f88a5b31dee163ca3d8ba6a36eb76ab58adb"}
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.463148 4909 generic.go:334] "Generic (PLEG): container finished" podID="68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" containerID="9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220" exitCode=0
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.463212 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14","Type":"ContainerDied","Data":"9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220"}
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.463222 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.463238 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68e9e66f-ffe3-49f0-ba72-de59a7f2ec14","Type":"ContainerDied","Data":"973e7273041589fa4f53d226643d93658fe0bcd3729c3228443fdcc42e8192d3"}
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.463256 4909 scope.go:117] "RemoveContainer" containerID="9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.486386 4909 generic.go:334] "Generic (PLEG): container finished" podID="0aedc9a8-307e-4ea5-bc63-b6c661275773" containerID="13767ceb28e0bc3faa7301a3c2f022aac45d597e934be4c4881db240dd43bde0" exitCode=0
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.486577 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-50ac-account-create-c58tq" event={"ID":"0aedc9a8-307e-4ea5-bc63-b6c661275773","Type":"ContainerDied","Data":"13767ceb28e0bc3faa7301a3c2f022aac45d597e934be4c4881db240dd43bde0"}
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.495800 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.399874819 podStartE2EDuration="5.495785546s" podCreationTimestamp="2025-11-26 07:20:04 +0000 UTC" firstStartedPulling="2025-11-26 07:20:05.281198098 +0000 UTC m=+1177.427409254" lastFinishedPulling="2025-11-26 07:20:08.377108805 +0000 UTC m=+1180.523319981" observedRunningTime="2025-11-26 07:20:09.475712125 +0000 UTC m=+1181.621923291" watchObservedRunningTime="2025-11-26 07:20:09.495785546 +0000 UTC m=+1181.641996712"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.570869 4909 scope.go:117] "RemoveContainer" containerID="15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.571064 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.584826 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.595029 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.626611 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 26 07:20:09 crc kubenswrapper[4909]: E1126 07:20:09.627038 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" containerName="glance-log"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.627053 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" containerName="glance-log"
Nov 26 07:20:09 crc kubenswrapper[4909]: E1126 07:20:09.627070 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" containerName="glance-httpd"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.627075 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" containerName="glance-httpd"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.627225 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" containerName="glance-log"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.627235 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" containerName="glance-httpd"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.628153 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.629950 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.630246 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.666708 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.678943 4909 scope.go:117] "RemoveContainer" containerID="9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220"
Nov 26 07:20:09 crc kubenswrapper[4909]: E1126 07:20:09.679717 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220\": container with ID starting with 9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220 not found: ID does not exist" containerID="9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.679748 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220"} err="failed to get container status \"9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220\": rpc error: code = NotFound desc = could not find container \"9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220\": container with ID starting with 9b30df4258e252c28ddf9093fff28982e9acf9d10fa5203b020d36418d86f220 not found: ID does not exist"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.679774 4909 scope.go:117] "RemoveContainer" containerID="15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9"
Nov 26 07:20:09 crc kubenswrapper[4909]: E1126 07:20:09.680231 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9\": container with ID starting with 15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9 not found: ID does not exist" containerID="15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.680253 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9"} err="failed to get container status \"15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9\": rpc error: code = NotFound desc = could not find container \"15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9\": container with ID starting with 15dabfff076ba122c47f79a22d17dc909005c4125a4ad424f16a7ed3c12bf6b9 not found: ID does not exist"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.743238 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s5pl\" (UniqueName: \"kubernetes.io/projected/0fdde234-058b-4e39-a647-b87669d3fda5-kube-api-access-6s5pl\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.743335 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-logs\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.743372 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.743417 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.743447 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.743515 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.743541 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.747406 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.849401 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.849462 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s5pl\" (UniqueName: \"kubernetes.io/projected/0fdde234-058b-4e39-a647-b87669d3fda5-kube-api-access-6s5pl\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.849535 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-logs\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.849572 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.849644 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.849678 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.849743 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.849778 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.849973 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.850375 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.859986 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.864311 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.869087 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-logs\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.871337 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.873377 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.892005 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s5pl\" (UniqueName: \"kubernetes.io/projected/0fdde234-058b-4e39-a647-b87669d3fda5-kube-api-access-6s5pl\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.913927 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:09 crc kubenswrapper[4909]: I1126 07:20:09.969006 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.154282 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1964-account-create-gb24n"
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.261287 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbhzr\" (UniqueName: \"kubernetes.io/projected/2c61b861-dea8-48b1-a0f3-aeec4d1cb973-kube-api-access-pbhzr\") pod \"2c61b861-dea8-48b1-a0f3-aeec4d1cb973\" (UID: \"2c61b861-dea8-48b1-a0f3-aeec4d1cb973\") "
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.266493 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c61b861-dea8-48b1-a0f3-aeec4d1cb973-kube-api-access-pbhzr" (OuterVolumeSpecName: "kube-api-access-pbhzr") pod "2c61b861-dea8-48b1-a0f3-aeec4d1cb973" (UID: "2c61b861-dea8-48b1-a0f3-aeec4d1cb973"). InnerVolumeSpecName "kube-api-access-pbhzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.364040 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbhzr\" (UniqueName: \"kubernetes.io/projected/2c61b861-dea8-48b1-a0f3-aeec4d1cb973-kube-api-access-pbhzr\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.498655 4909 generic.go:334] "Generic (PLEG): container finished" podID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerID="77a55daed39f1df8e0111c410dc163e4c956a76e2dbeb26fb2c46f8a0c83a4c9" exitCode=0
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.498690 4909 generic.go:334] "Generic (PLEG): container finished" podID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerID="cd20b82b6cb255cb8ec632073b8e9990c8b02851b60c6247e0a5fe9e8944b3ee" exitCode=2
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.498698 4909 generic.go:334] "Generic (PLEG): container finished" podID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerID="fb7ba41d1b508a19c8614b06d5b351863aa4f60b8d3c20fada30c698bfaa47b8" exitCode=0
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.502996 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1964-account-create-gb24n"
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.514467 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e9e66f-ffe3-49f0-ba72-de59a7f2ec14" path="/var/lib/kubelet/pods/68e9e66f-ffe3-49f0-ba72-de59a7f2ec14/volumes"
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.515943 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2d0d49e-98d5-488c-a739-b43cdb3013a1","Type":"ContainerDied","Data":"77a55daed39f1df8e0111c410dc163e4c956a76e2dbeb26fb2c46f8a0c83a4c9"}
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.515969 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2d0d49e-98d5-488c-a739-b43cdb3013a1","Type":"ContainerDied","Data":"cd20b82b6cb255cb8ec632073b8e9990c8b02851b60c6247e0a5fe9e8944b3ee"}
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.516007 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2d0d49e-98d5-488c-a739-b43cdb3013a1","Type":"ContainerDied","Data":"fb7ba41d1b508a19c8614b06d5b351863aa4f60b8d3c20fada30c698bfaa47b8"}
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.516017 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1964-account-create-gb24n" event={"ID":"2c61b861-dea8-48b1-a0f3-aeec4d1cb973","Type":"ContainerDied","Data":"01d4fd35d63c05c6fcc40940ae81edbf4b5bc1dc9af0c464f415de9cb3fa0f9c"}
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.516027 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d4fd35d63c05c6fcc40940ae81edbf4b5bc1dc9af0c464f415de9cb3fa0f9c"
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.516039 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6791905e-4b74-417e-bc1b-0747eac5878e","Type":"ContainerStarted","Data":"78f1adaa8690f5dd5cda40c513600bbf47bbcd1e19dbec596d90cf360a9cce71"}
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.774006 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.891389 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-50ac-account-create-c58tq"
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.935862 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8b90-account-create-t78ss"
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.975416 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdjjh\" (UniqueName: \"kubernetes.io/projected/0aedc9a8-307e-4ea5-bc63-b6c661275773-kube-api-access-jdjjh\") pod \"0aedc9a8-307e-4ea5-bc63-b6c661275773\" (UID: \"0aedc9a8-307e-4ea5-bc63-b6c661275773\") "
Nov 26 07:20:10 crc kubenswrapper[4909]: I1126 07:20:10.980014 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aedc9a8-307e-4ea5-bc63-b6c661275773-kube-api-access-jdjjh" (OuterVolumeSpecName: "kube-api-access-jdjjh") pod "0aedc9a8-307e-4ea5-bc63-b6c661275773" (UID: "0aedc9a8-307e-4ea5-bc63-b6c661275773"). InnerVolumeSpecName "kube-api-access-jdjjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.077230 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6k7l\" (UniqueName: \"kubernetes.io/projected/fdcdfe5a-4464-43fe-94de-09f58a3f7a46-kube-api-access-l6k7l\") pod \"fdcdfe5a-4464-43fe-94de-09f58a3f7a46\" (UID: \"fdcdfe5a-4464-43fe-94de-09f58a3f7a46\") "
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.077880 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdjjh\" (UniqueName: \"kubernetes.io/projected/0aedc9a8-307e-4ea5-bc63-b6c661275773-kube-api-access-jdjjh\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.081276 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcdfe5a-4464-43fe-94de-09f58a3f7a46-kube-api-access-l6k7l" (OuterVolumeSpecName: "kube-api-access-l6k7l") pod "fdcdfe5a-4464-43fe-94de-09f58a3f7a46" (UID: "fdcdfe5a-4464-43fe-94de-09f58a3f7a46"). InnerVolumeSpecName "kube-api-access-l6k7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.179901 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6k7l\" (UniqueName: \"kubernetes.io/projected/fdcdfe5a-4464-43fe-94de-09f58a3f7a46-kube-api-access-l6k7l\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.521340 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6791905e-4b74-417e-bc1b-0747eac5878e","Type":"ContainerStarted","Data":"6e9c74d44de9181ab6e32d80d4b92f4cc0c240f37302b35ea2fea0bdf1f79435"}
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.523246 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8b90-account-create-t78ss"
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.523235 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8b90-account-create-t78ss" event={"ID":"fdcdfe5a-4464-43fe-94de-09f58a3f7a46","Type":"ContainerDied","Data":"32227aa35f7d32c38f3423d81c1a433c907d144af97ab022171ff6a45dda0e7d"}
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.523379 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32227aa35f7d32c38f3423d81c1a433c907d144af97ab022171ff6a45dda0e7d"
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.525218 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-50ac-account-create-c58tq" event={"ID":"0aedc9a8-307e-4ea5-bc63-b6c661275773","Type":"ContainerDied","Data":"45da4a9004175364ba46a692c372e754a021e3f5b95d6567340dfa7d5cd7acef"}
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.525243 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45da4a9004175364ba46a692c372e754a021e3f5b95d6567340dfa7d5cd7acef"
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.525295 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-50ac-account-create-c58tq"
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.531252 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fdde234-058b-4e39-a647-b87669d3fda5","Type":"ContainerStarted","Data":"346c5a3945f1eaf91c1fcf6d0365d419473a3372fe0402c424978821010165bc"}
Nov 26 07:20:11 crc kubenswrapper[4909]: I1126 07:20:11.531290 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fdde234-058b-4e39-a647-b87669d3fda5","Type":"ContainerStarted","Data":"4778216886b7c052c38eb6655f0ac9e2b5ab33d3d886636dfd118e1069502765"}
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.383360 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k8rzm"]
Nov 26 07:20:12 crc kubenswrapper[4909]: E1126 07:20:12.383959 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aedc9a8-307e-4ea5-bc63-b6c661275773" containerName="mariadb-account-create"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.383976 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aedc9a8-307e-4ea5-bc63-b6c661275773" containerName="mariadb-account-create"
Nov 26 07:20:12 crc kubenswrapper[4909]: E1126 07:20:12.384012 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c61b861-dea8-48b1-a0f3-aeec4d1cb973" containerName="mariadb-account-create"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.384019 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c61b861-dea8-48b1-a0f3-aeec4d1cb973" containerName="mariadb-account-create"
Nov 26 07:20:12 crc kubenswrapper[4909]: E1126 07:20:12.384040 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcdfe5a-4464-43fe-94de-09f58a3f7a46" containerName="mariadb-account-create"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.384047 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcdfe5a-4464-43fe-94de-09f58a3f7a46" containerName="mariadb-account-create"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.384402 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aedc9a8-307e-4ea5-bc63-b6c661275773" containerName="mariadb-account-create"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.384426 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c61b861-dea8-48b1-a0f3-aeec4d1cb973" containerName="mariadb-account-create"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.384442 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcdfe5a-4464-43fe-94de-09f58a3f7a46" containerName="mariadb-account-create"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.385063 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.387010 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5zvnt"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.387894 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.389458 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.391380 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k8rzm"]
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.508930 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.509022 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-scripts\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.509039 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jhx2\" (UniqueName: \"kubernetes.io/projected/d5f47300-df0d-451f-bc80-feec784391ec-kube-api-access-7jhx2\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.509068 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-config-data\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.545058 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fdde234-058b-4e39-a647-b87669d3fda5","Type":"ContainerStarted","Data":"c6050544ee612b28a902864fcf0420d3aca003e37b8bf69449abc96dd1260ebc"}
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.549864 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6791905e-4b74-417e-bc1b-0747eac5878e","Type":"ContainerStarted","Data":"9af326558746ae1f5b6fd43ef25bfc03798d1031c68245c1a4e7bd66e604b033"}
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.570218 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.57019645 podStartE2EDuration="3.57019645s" podCreationTimestamp="2025-11-26 07:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:20:12.562219536 +0000 UTC m=+1184.708430702" watchObservedRunningTime="2025-11-26 07:20:12.57019645 +0000 UTC m=+1184.716407626"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.588297 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.588274346 podStartE2EDuration="4.588274346s" podCreationTimestamp="2025-11-26 07:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:20:12.585940101 +0000 UTC m=+1184.732151267" watchObservedRunningTime="2025-11-26 07:20:12.588274346 +0000 UTC m=+1184.734485522"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.610865 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-scripts\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.610903 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jhx2\" (UniqueName: \"kubernetes.io/projected/d5f47300-df0d-451f-bc80-feec784391ec-kube-api-access-7jhx2\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.610962 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-config-data\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.611074 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.616369 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-scripts\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.616559 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-config-data\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.618971 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.628795 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jhx2\" (UniqueName: \"kubernetes.io/projected/d5f47300-df0d-451f-bc80-feec784391ec-kube-api-access-7jhx2\") pod \"nova-cell0-conductor-db-sync-k8rzm\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:12 crc kubenswrapper[4909]: I1126 07:20:12.707704 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k8rzm"
Nov 26 07:20:13 crc kubenswrapper[4909]: W1126 07:20:13.232007 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5f47300_df0d_451f_bc80_feec784391ec.slice/crio-80efef6b15f286abc56a67351b4e09922577ac022ba79bc04f679b5638ef0c47 WatchSource:0}: Error finding container 80efef6b15f286abc56a67351b4e09922577ac022ba79bc04f679b5638ef0c47: Status 404 returned error can't find the container with id 80efef6b15f286abc56a67351b4e09922577ac022ba79bc04f679b5638ef0c47
Nov 26 07:20:13 crc kubenswrapper[4909]: I1126 07:20:13.236008 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k8rzm"]
Nov 26 07:20:13 crc kubenswrapper[4909]: I1126 07:20:13.562019 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k8rzm" event={"ID":"d5f47300-df0d-451f-bc80-feec784391ec","Type":"ContainerStarted","Data":"80efef6b15f286abc56a67351b4e09922577ac022ba79bc04f679b5638ef0c47"}
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.573011 4909 generic.go:334] "Generic (PLEG): container finished" podID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerID="da1179331683703744dab835e1520b28f87db03e32e364ebd254b655f44ab702" exitCode=0
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.573095 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2d0d49e-98d5-488c-a739-b43cdb3013a1","Type":"ContainerDied","Data":"da1179331683703744dab835e1520b28f87db03e32e364ebd254b655f44ab702"}
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.573467 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2d0d49e-98d5-488c-a739-b43cdb3013a1","Type":"ContainerDied","Data":"b1ec3979b7040fbb7978c009c9f08b87ea00b7bd43ade0988f45caf6086d0d47"}
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.573481 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1ec3979b7040fbb7978c009c9f08b87ea00b7bd43ade0988f45caf6086d0d47"
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.637126 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.760175 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-config-data\") pod \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") "
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.761012 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m774\" (UniqueName: \"kubernetes.io/projected/a2d0d49e-98d5-488c-a739-b43cdb3013a1-kube-api-access-2m774\") pod \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") "
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.761083 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-run-httpd\") pod \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") "
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.761144 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-scripts\") pod \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") "
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.761540 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a2d0d49e-98d5-488c-a739-b43cdb3013a1" (UID: "a2d0d49e-98d5-488c-a739-b43cdb3013a1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.761686 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-combined-ca-bundle\") pod \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") "
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.761754 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-log-httpd\") pod \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") "
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.761899 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-sg-core-conf-yaml\") pod \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\" (UID: \"a2d0d49e-98d5-488c-a739-b43cdb3013a1\") "
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.762444 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.762486 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a2d0d49e-98d5-488c-a739-b43cdb3013a1" (UID: "a2d0d49e-98d5-488c-a739-b43cdb3013a1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.766624 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-scripts" (OuterVolumeSpecName: "scripts") pod "a2d0d49e-98d5-488c-a739-b43cdb3013a1" (UID: "a2d0d49e-98d5-488c-a739-b43cdb3013a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.767827 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d0d49e-98d5-488c-a739-b43cdb3013a1-kube-api-access-2m774" (OuterVolumeSpecName: "kube-api-access-2m774") pod "a2d0d49e-98d5-488c-a739-b43cdb3013a1" (UID: "a2d0d49e-98d5-488c-a739-b43cdb3013a1"). InnerVolumeSpecName "kube-api-access-2m774". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.790324 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a2d0d49e-98d5-488c-a739-b43cdb3013a1" (UID: "a2d0d49e-98d5-488c-a739-b43cdb3013a1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.856907 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2d0d49e-98d5-488c-a739-b43cdb3013a1" (UID: "a2d0d49e-98d5-488c-a739-b43cdb3013a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.864152 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.864179 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.864205 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2d0d49e-98d5-488c-a739-b43cdb3013a1-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.864219 4909 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.864231 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m774\" (UniqueName: \"kubernetes.io/projected/a2d0d49e-98d5-488c-a739-b43cdb3013a1-kube-api-access-2m774\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.870695 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-config-data" (OuterVolumeSpecName: "config-data") pod "a2d0d49e-98d5-488c-a739-b43cdb3013a1" (UID: "a2d0d49e-98d5-488c-a739-b43cdb3013a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:20:14 crc kubenswrapper[4909]: I1126 07:20:14.966959 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d0d49e-98d5-488c-a739-b43cdb3013a1-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.583709 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.623545 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.632791 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.656331 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:20:15 crc kubenswrapper[4909]: E1126 07:20:15.657001 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="sg-core"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.657021 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="sg-core"
Nov 26 07:20:15 crc kubenswrapper[4909]: E1126 07:20:15.657035 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="ceilometer-notification-agent"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.657041 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="ceilometer-notification-agent"
Nov 26 07:20:15 crc kubenswrapper[4909]: E1126 07:20:15.657061 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="ceilometer-central-agent"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.657067 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="ceilometer-central-agent"
Nov 26 07:20:15 crc kubenswrapper[4909]: E1126 07:20:15.657088 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="proxy-httpd"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.657094 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="proxy-httpd"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.657296 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="ceilometer-notification-agent"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.657322 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="ceilometer-central-agent"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.657337 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="proxy-httpd"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.657354 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" containerName="sg-core"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.659296 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.663223 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.663473 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.694196 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.780121 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-scripts\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.780172 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-run-httpd\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.780264 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.780307 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-config-data\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.780390 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.780439 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89t4\" (UniqueName: \"kubernetes.io/projected/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-kube-api-access-h89t4\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.780466 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-log-httpd\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.882006 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.882053 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h89t4\" (UniqueName: \"kubernetes.io/projected/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-kube-api-access-h89t4\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.882079 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-log-httpd\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.882126 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-scripts\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.882151 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-run-httpd\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.882195 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.882226 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-config-data\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.883074 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-log-httpd\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.883133 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-run-httpd\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.886793 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-scripts\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.886975 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.887240 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.897731 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h89t4\" (UniqueName: \"kubernetes.io/projected/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-kube-api-access-h89t4\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.897823 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-config-data\") pod \"ceilometer-0\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " pod="openstack/ceilometer-0"
Nov 26 07:20:15 crc kubenswrapper[4909]: I1126 07:20:15.985604 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 26 07:20:16 crc kubenswrapper[4909]: I1126 07:20:16.514533 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d0d49e-98d5-488c-a739-b43cdb3013a1" path="/var/lib/kubelet/pods/a2d0d49e-98d5-488c-a739-b43cdb3013a1/volumes"
Nov 26 07:20:18 crc kubenswrapper[4909]: I1126 07:20:18.896074 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 26 07:20:18 crc kubenswrapper[4909]: I1126 07:20:18.896484 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 26 07:20:18 crc kubenswrapper[4909]: I1126 07:20:18.923382 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 26 07:20:18 crc kubenswrapper[4909]: I1126 07:20:18.934868 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 26 07:20:19 crc kubenswrapper[4909]: I1126 07:20:19.612946 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:20:19 crc kubenswrapper[4909]: I1126 07:20:19.623140 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k8rzm" event={"ID":"d5f47300-df0d-451f-bc80-feec784391ec","Type":"ContainerStarted","Data":"0e0ab69dd6fadaaf1295f81a6d9010207c24862ab74ebb9d59037e0435c21992"}
Nov 26 07:20:19 crc kubenswrapper[4909]: I1126 07:20:19.623578 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 26 07:20:19 crc kubenswrapper[4909]: I1126 07:20:19.623779 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 26 07:20:19 crc kubenswrapper[4909]: W1126 07:20:19.625142 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod674d21c8_e4a1_426b_a92f_0cb3ce669cdd.slice/crio-d3a92d9bbde00b29cca80ffd9134d6e714dbf71980866ecbb57b9644d7476228 WatchSource:0}: Error finding container d3a92d9bbde00b29cca80ffd9134d6e714dbf71980866ecbb57b9644d7476228: Status 404 returned error can't find the container with id d3a92d9bbde00b29cca80ffd9134d6e714dbf71980866ecbb57b9644d7476228
Nov 26 07:20:19 crc kubenswrapper[4909]: I1126 07:20:19.637775 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-k8rzm" podStartSLOduration=1.630775136 podStartE2EDuration="7.637752743s" podCreationTimestamp="2025-11-26 07:20:12 +0000 UTC" firstStartedPulling="2025-11-26 07:20:13.238749333 +0000 UTC m=+1185.384960499" lastFinishedPulling="2025-11-26 07:20:19.24572694 +0000 UTC m=+1191.391938106" observedRunningTime="2025-11-26 07:20:19.63693686 +0000 UTC m=+1191.783148026" watchObservedRunningTime="2025-11-26 07:20:19.637752743 +0000 UTC m=+1191.783963919"
Nov 26 07:20:19 crc kubenswrapper[4909]: I1126 07:20:19.969333 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:19 crc kubenswrapper[4909]: I1126 07:20:19.969618 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:20 crc kubenswrapper[4909]: I1126 07:20:20.001559 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:20 crc kubenswrapper[4909]: I1126 07:20:20.022534 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:20 crc kubenswrapper[4909]: I1126 07:20:20.636355 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"674d21c8-e4a1-426b-a92f-0cb3ce669cdd","Type":"ContainerStarted","Data":"c09272442306797377a34fea6b825f4a3bfc6ac3adcdca53ab3b1b04d411465b"}
Nov 26 07:20:20 crc kubenswrapper[4909]: I1126 07:20:20.636846 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:20 crc kubenswrapper[4909]: I1126 07:20:20.636908 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:20 crc kubenswrapper[4909]: I1126 07:20:20.636928 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"674d21c8-e4a1-426b-a92f-0cb3ce669cdd","Type":"ContainerStarted","Data":"d3a92d9bbde00b29cca80ffd9134d6e714dbf71980866ecbb57b9644d7476228"}
Nov 26 07:20:21 crc kubenswrapper[4909]: I1126 07:20:21.648742 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"674d21c8-e4a1-426b-a92f-0cb3ce669cdd","Type":"ContainerStarted","Data":"23163dadbbe584f437f6be1e20951a2a39fd9926b64bb83d12f127d7596c4452"}
Nov 26 07:20:21 crc kubenswrapper[4909]: I1126 07:20:21.702354 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 26 07:20:21 crc kubenswrapper[4909]: I1126 07:20:21.702472 4909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 26 07:20:21 crc kubenswrapper[4909]: I1126 07:20:21.703460 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 26 07:20:22 crc kubenswrapper[4909]: I1126 07:20:22.655034 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:22 crc kubenswrapper[4909]: I1126 07:20:22.655351 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 26 07:20:22 crc kubenswrapper[4909]: I1126 07:20:22.664281 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"674d21c8-e4a1-426b-a92f-0cb3ce669cdd","Type":"ContainerStarted","Data":"87737cf7a5b6e04a000e60623e46ccb99f0285de87be4d2bc694518d70089d85"}
Nov 26 07:20:23 crc kubenswrapper[4909]: I1126 07:20:23.675623 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"674d21c8-e4a1-426b-a92f-0cb3ce669cdd","Type":"ContainerStarted","Data":"dec09c7c9b97a0fda4240ea49e5f945d53afc04db48ec6a96e45a8ac7eb601c5"}
Nov 26 07:20:23 crc kubenswrapper[4909]: I1126 07:20:23.676056 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 26 07:20:23 crc kubenswrapper[4909]: I1126 07:20:23.696867 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.53939512 podStartE2EDuration="8.696849478s" podCreationTimestamp="2025-11-26 07:20:15 +0000 UTC" firstStartedPulling="2025-11-26 07:20:19.627942839 +0000 UTC m=+1191.774154025" lastFinishedPulling="2025-11-26 07:20:22.785397217 +0000 UTC m=+1194.931608383" observedRunningTime="2025-11-26 07:20:23.696386965 +0000 UTC m=+1195.842598151" watchObservedRunningTime="2025-11-26 07:20:23.696849478 +0000 UTC m=+1195.843060634"
Nov 26 07:20:30 crc kubenswrapper[4909]: I1126 07:20:30.743255 4909 generic.go:334] "Generic (PLEG): container finished" podID="d5f47300-df0d-451f-bc80-feec784391ec" containerID="0e0ab69dd6fadaaf1295f81a6d9010207c24862ab74ebb9d59037e0435c21992" exitCode=0
Nov 26 07:20:30 crc kubenswrapper[4909]: I1126 07:20:30.743937 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k8rzm" event={"ID":"d5f47300-df0d-451f-bc80-feec784391ec","Type":"ContainerDied","Data":"0e0ab69dd6fadaaf1295f81a6d9010207c24862ab74ebb9d59037e0435c21992"}
Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.106242 4909 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k8rzm" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.196468 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-combined-ca-bundle\") pod \"d5f47300-df0d-451f-bc80-feec784391ec\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.196633 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jhx2\" (UniqueName: \"kubernetes.io/projected/d5f47300-df0d-451f-bc80-feec784391ec-kube-api-access-7jhx2\") pod \"d5f47300-df0d-451f-bc80-feec784391ec\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.196655 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-scripts\") pod \"d5f47300-df0d-451f-bc80-feec784391ec\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.197307 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-config-data\") pod \"d5f47300-df0d-451f-bc80-feec784391ec\" (UID: \"d5f47300-df0d-451f-bc80-feec784391ec\") " Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.202310 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-scripts" (OuterVolumeSpecName: "scripts") pod "d5f47300-df0d-451f-bc80-feec784391ec" (UID: "d5f47300-df0d-451f-bc80-feec784391ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.209446 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f47300-df0d-451f-bc80-feec784391ec-kube-api-access-7jhx2" (OuterVolumeSpecName: "kube-api-access-7jhx2") pod "d5f47300-df0d-451f-bc80-feec784391ec" (UID: "d5f47300-df0d-451f-bc80-feec784391ec"). InnerVolumeSpecName "kube-api-access-7jhx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.228721 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5f47300-df0d-451f-bc80-feec784391ec" (UID: "d5f47300-df0d-451f-bc80-feec784391ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.228851 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-config-data" (OuterVolumeSpecName: "config-data") pod "d5f47300-df0d-451f-bc80-feec784391ec" (UID: "d5f47300-df0d-451f-bc80-feec784391ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.299753 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.299797 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jhx2\" (UniqueName: \"kubernetes.io/projected/d5f47300-df0d-451f-bc80-feec784391ec-kube-api-access-7jhx2\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.299812 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.299824 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f47300-df0d-451f-bc80-feec784391ec-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.778556 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k8rzm" event={"ID":"d5f47300-df0d-451f-bc80-feec784391ec","Type":"ContainerDied","Data":"80efef6b15f286abc56a67351b4e09922577ac022ba79bc04f679b5638ef0c47"} Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.779364 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80efef6b15f286abc56a67351b4e09922577ac022ba79bc04f679b5638ef0c47" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.778569 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k8rzm" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.864465 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 07:20:32 crc kubenswrapper[4909]: E1126 07:20:32.865120 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f47300-df0d-451f-bc80-feec784391ec" containerName="nova-cell0-conductor-db-sync" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.865147 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f47300-df0d-451f-bc80-feec784391ec" containerName="nova-cell0-conductor-db-sync" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.865482 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f47300-df0d-451f-bc80-feec784391ec" containerName="nova-cell0-conductor-db-sync" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.866516 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.868705 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.870777 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5zvnt" Nov 26 07:20:32 crc kubenswrapper[4909]: I1126 07:20:32.881808 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.010370 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvn6h\" (UniqueName: \"kubernetes.io/projected/0d2c4878-7f21-469c-b19b-c76f335e9e75-kube-api-access-jvn6h\") pod \"nova-cell0-conductor-0\" (UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.010512 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.010567 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.112143 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvn6h\" (UniqueName: \"kubernetes.io/projected/0d2c4878-7f21-469c-b19b-c76f335e9e75-kube-api-access-jvn6h\") pod \"nova-cell0-conductor-0\" (UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.112257 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.112314 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.116649 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.122193 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.137268 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvn6h\" (UniqueName: \"kubernetes.io/projected/0d2c4878-7f21-469c-b19b-c76f335e9e75-kube-api-access-jvn6h\") pod \"nova-cell0-conductor-0\" (UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.184160 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:33 crc kubenswrapper[4909]: W1126 07:20:33.615790 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d2c4878_7f21_469c_b19b_c76f335e9e75.slice/crio-a0b6335d54b72bb27be284681cef069817c3a32b984a7c558722b4bb97e568bb WatchSource:0}: Error finding container a0b6335d54b72bb27be284681cef069817c3a32b984a7c558722b4bb97e568bb: Status 404 returned error can't find the container with id a0b6335d54b72bb27be284681cef069817c3a32b984a7c558722b4bb97e568bb Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.632742 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.789857 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0d2c4878-7f21-469c-b19b-c76f335e9e75","Type":"ContainerStarted","Data":"a0b6335d54b72bb27be284681cef069817c3a32b984a7c558722b4bb97e568bb"} Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.790650 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:33 crc kubenswrapper[4909]: I1126 07:20:33.812625 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.8125864200000001 podStartE2EDuration="1.81258642s" podCreationTimestamp="2025-11-26 07:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:20:33.804611917 +0000 UTC m=+1205.950823103" watchObservedRunningTime="2025-11-26 07:20:33.81258642 +0000 UTC m=+1205.958797586" Nov 26 07:20:34 crc kubenswrapper[4909]: I1126 07:20:34.802081 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0d2c4878-7f21-469c-b19b-c76f335e9e75","Type":"ContainerStarted","Data":"000a88ca7c5406a86dd0230b6355446065c541ff1b5dd6796e96d8e7b58a4adc"} Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.212048 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.661302 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2zb5p"] Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.662780 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.665332 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.665332 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.675373 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2zb5p"] Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.737029 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-scripts\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.737279 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-config-data\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.737342 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.737388 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bp7p\" (UniqueName: \"kubernetes.io/projected/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-kube-api-access-2bp7p\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.799205 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.800987 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.803542 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.812812 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.843630 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-config-data\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.843718 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.843771 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bp7p\" (UniqueName: \"kubernetes.io/projected/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-kube-api-access-2bp7p\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.843883 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-scripts\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.851574 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-config-data\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.862210 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-scripts\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.862810 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.892630 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bp7p\" (UniqueName: \"kubernetes.io/projected/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-kube-api-access-2bp7p\") pod \"nova-cell0-cell-mapping-2zb5p\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.900929 4909 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.910173 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.926454 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.927301 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.949835 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.949878 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw7d8\" (UniqueName: \"kubernetes.io/projected/ac655ffd-a946-4bf2-90e8-242851bb6dca-kube-api-access-xw7d8\") pod \"nova-scheduler-0\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.949958 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-config-data\") pod \"nova-scheduler-0\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.956570 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.958015 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:38 crc kubenswrapper[4909]: I1126 07:20:38.981910 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.007011 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.034053 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.056564 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c829k\" (UniqueName: \"kubernetes.io/projected/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-kube-api-access-c829k\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.056627 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.056686 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.056702 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw7d8\" (UniqueName: \"kubernetes.io/projected/ac655ffd-a946-4bf2-90e8-242851bb6dca-kube-api-access-xw7d8\") pod \"nova-scheduler-0\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.056745 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.056774 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-config-data\") pod \"nova-scheduler-0\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.056810 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldl4b\" (UniqueName: \"kubernetes.io/projected/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-kube-api-access-ldl4b\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.056843 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.056884 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-config-data\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.056901 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-logs\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.087236 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.100680 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-config-data\") pod \"nova-scheduler-0\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.129377 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw7d8\" (UniqueName: \"kubernetes.io/projected/ac655ffd-a946-4bf2-90e8-242851bb6dca-kube-api-access-xw7d8\") pod \"nova-scheduler-0\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.161545 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.161632 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-config-data\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.161652 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-logs\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.161679 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c829k\" (UniqueName: \"kubernetes.io/projected/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-kube-api-access-c829k\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.161701 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.161771 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.161821 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldl4b\" (UniqueName: \"kubernetes.io/projected/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-kube-api-access-ldl4b\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.167361 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-logs\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.184304 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.189067 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-config-data\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.190150 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.191201 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.192217 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldl4b\" (UniqueName: \"kubernetes.io/projected/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-kube-api-access-ldl4b\") pod \"nova-api-0\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.223422 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.247322 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.252042 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c829k\" (UniqueName: \"kubernetes.io/projected/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-kube-api-access-c829k\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.277004 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.286575 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.287747 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.362566 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.383093 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-logs\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.383211 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-config-data\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.383383 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.383720 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfznd\" (UniqueName: \"kubernetes.io/projected/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-kube-api-access-bfznd\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.387456 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4nnzg"] Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.389019 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.422017 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.441766 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4nnzg"] Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.490041 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-logs\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.490109 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-config-data\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.490132 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.490199 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfznd\" (UniqueName: \"kubernetes.io/projected/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-kube-api-access-bfznd\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.490255 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7k4z\" (UniqueName: \"kubernetes.io/projected/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-kube-api-access-m7k4z\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.490314 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.490335 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-config\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.490390 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.490437 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.490457 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.510238 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-logs\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.520515 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.520577 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-config-data\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.521172 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfznd\" (UniqueName: \"kubernetes.io/projected/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-kube-api-access-bfznd\") pod \"nova-metadata-0\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.591778 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7k4z\" (UniqueName: \"kubernetes.io/projected/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-kube-api-access-m7k4z\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.591845 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.591866 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-config\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.591914 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " 
pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.591946 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.591966 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.592898 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-config\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.596528 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.596872 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.601104 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.604422 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.620498 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7k4z\" (UniqueName: \"kubernetes.io/projected/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-kube-api-access-m7k4z\") pod \"dnsmasq-dns-845d6d6f59-4nnzg\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.635362 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.724746 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:39 crc kubenswrapper[4909]: I1126 07:20:39.826047 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2zb5p"] Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.007372 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.107689 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jlct9"] Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.109080 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.116016 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.116227 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.124138 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jlct9"] Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.155661 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.165300 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.210329 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bpps\" (UniqueName: \"kubernetes.io/projected/602f4606-6ad9-4358-935e-b4dcc0282e50-kube-api-access-5bpps\") pod \"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.210461 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-scripts\") pod \"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.210615 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.212190 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-config-data\") pod \"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.315545 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bpps\" (UniqueName: \"kubernetes.io/projected/602f4606-6ad9-4358-935e-b4dcc0282e50-kube-api-access-5bpps\") pod 
\"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.315658 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-scripts\") pod \"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.315762 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.316713 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-config-data\") pod \"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.321177 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-config-data\") pod \"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.321886 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-scripts\") pod \"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.323393 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.327062 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.333455 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bpps\" (UniqueName: \"kubernetes.io/projected/602f4606-6ad9-4358-935e-b4dcc0282e50-kube-api-access-5bpps\") pod \"nova-cell1-conductor-db-sync-jlct9\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.472455 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4nnzg"] Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.548951 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.898983 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2zb5p" event={"ID":"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8","Type":"ContainerStarted","Data":"c340f71bc1c6c25a928c8f228589f2df2d196a7a03b5b16c0c1846e902a918ac"} Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.899540 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2zb5p" event={"ID":"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8","Type":"ContainerStarted","Data":"46013bbe649575c302d899466388569783b978b4f671e4a26fb1dbda0db3e7fd"} Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.900168 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"692dfe86-909b-41d9-bcf2-19ed88b3b9cb","Type":"ContainerStarted","Data":"f24f187a66189109f73298b28149997aa4f1debc6a24d54c4b925ab7d7c5bb1c"} Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.902401 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac655ffd-a946-4bf2-90e8-242851bb6dca","Type":"ContainerStarted","Data":"14c074ddb8f3c2c293052fe387f6230b01c2a53bea56b6615c3e8de6e8a5ff78"} Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.910191 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90","Type":"ContainerStarted","Data":"72d875b3e75c2b99c0634a564d755deadd468c80550525fd32b1db8bacbbac35"} Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.919242 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4","Type":"ContainerStarted","Data":"d1f8b3d339c5c62f79fb7ffe33ef74c4a6cd2a038f009b501546ddacf90038ed"} Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.923657 4909 generic.go:334] "Generic (PLEG): container finished" podID="a92c7a14-df9b-4a85-a8dd-a8039a2cb928" containerID="ce759a151efc698243edf6a265214690898c22a0fd1adafe7bd0f44a65e7fb21" exitCode=0 Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.923703 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" event={"ID":"a92c7a14-df9b-4a85-a8dd-a8039a2cb928","Type":"ContainerDied","Data":"ce759a151efc698243edf6a265214690898c22a0fd1adafe7bd0f44a65e7fb21"} Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.923730 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" event={"ID":"a92c7a14-df9b-4a85-a8dd-a8039a2cb928","Type":"ContainerStarted","Data":"bc3251fd16351eb9a98997b1c36af5e33751700f34be8317be284de25cd3d029"} Nov 26 07:20:40 crc kubenswrapper[4909]: I1126 07:20:40.963961 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2zb5p" podStartSLOduration=2.963939579 podStartE2EDuration="2.963939579s" podCreationTimestamp="2025-11-26 07:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:20:40.917271162 +0000 UTC m=+1213.063482338" watchObservedRunningTime="2025-11-26 07:20:40.963939579 +0000 UTC m=+1213.110150745" Nov 26 07:20:41 crc kubenswrapper[4909]: I1126 07:20:41.020194 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jlct9"] Nov 26 
07:20:41 crc kubenswrapper[4909]: I1126 07:20:41.949232 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" event={"ID":"a92c7a14-df9b-4a85-a8dd-a8039a2cb928","Type":"ContainerStarted","Data":"5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b"} Nov 26 07:20:41 crc kubenswrapper[4909]: I1126 07:20:41.951028 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:41 crc kubenswrapper[4909]: I1126 07:20:41.955760 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jlct9" event={"ID":"602f4606-6ad9-4358-935e-b4dcc0282e50","Type":"ContainerStarted","Data":"4add6d1b0117e9447b11bea0e6b55ba54c459b24effc28c5fb858469b66444e8"} Nov 26 07:20:41 crc kubenswrapper[4909]: I1126 07:20:41.955936 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jlct9" event={"ID":"602f4606-6ad9-4358-935e-b4dcc0282e50","Type":"ContainerStarted","Data":"2485d20d3b0beede24da0c9a436ecff79c38ee034e8c6e584e1ac4b486d18d0e"} Nov 26 07:20:41 crc kubenswrapper[4909]: I1126 07:20:41.973363 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" podStartSLOduration=2.973346432 podStartE2EDuration="2.973346432s" podCreationTimestamp="2025-11-26 07:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:20:41.972518679 +0000 UTC m=+1214.118729845" watchObservedRunningTime="2025-11-26 07:20:41.973346432 +0000 UTC m=+1214.119557598" Nov 26 07:20:41 crc kubenswrapper[4909]: I1126 07:20:41.992567 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jlct9" podStartSLOduration=1.992547179 podStartE2EDuration="1.992547179s" podCreationTimestamp="2025-11-26 07:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:20:41.988881347 +0000 UTC m=+1214.135092513" watchObservedRunningTime="2025-11-26 07:20:41.992547179 +0000 UTC m=+1214.138758345" Nov 26 07:20:42 crc kubenswrapper[4909]: I1126 07:20:42.765061 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:20:42 crc kubenswrapper[4909]: I1126 07:20:42.778611 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:20:44 crc kubenswrapper[4909]: I1126 07:20:44.981388 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90","Type":"ContainerStarted","Data":"76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5"} Nov 26 07:20:44 crc kubenswrapper[4909]: I1126 07:20:44.981998 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f8c42f1d-d209-4476-9e6b-5a54fdd4ac90" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5" gracePeriod=30 Nov 26 07:20:44 crc kubenswrapper[4909]: I1126 07:20:44.986731 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4","Type":"ContainerStarted","Data":"187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c"} Nov 26 07:20:44 crc kubenswrapper[4909]: I1126 07:20:44.986815 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" containerName="nova-metadata-log" containerID="cri-o://25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538" gracePeriod=30 Nov 26 07:20:44 crc kubenswrapper[4909]: I1126 07:20:44.986843 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4","Type":"ContainerStarted","Data":"25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538"} Nov 26 07:20:44 crc kubenswrapper[4909]: I1126 07:20:44.986943 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" containerName="nova-metadata-metadata" containerID="cri-o://187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c" gracePeriod=30 Nov 26 07:20:44 crc kubenswrapper[4909]: I1126 07:20:44.997888 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"692dfe86-909b-41d9-bcf2-19ed88b3b9cb","Type":"ContainerStarted","Data":"035a95b7b61112716739cee417c7721c0752de1e7471414dd007c9c6151254a7"} Nov 26 07:20:44 crc kubenswrapper[4909]: I1126 07:20:44.997926 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"692dfe86-909b-41d9-bcf2-19ed88b3b9cb","Type":"ContainerStarted","Data":"a34c38ba223c2da590058140a9a5113c8d320f00acffe03647c7b7617e414d9b"} Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.017794 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac655ffd-a946-4bf2-90e8-242851bb6dca","Type":"ContainerStarted","Data":"dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d"} Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.021807 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.477631426 podStartE2EDuration="7.021785768s" podCreationTimestamp="2025-11-26 07:20:38 +0000 UTC" firstStartedPulling="2025-11-26 07:20:40.1723069 +0000 UTC m=+1212.318518066" lastFinishedPulling="2025-11-26 07:20:43.716461242 +0000 UTC m=+1215.862672408" observedRunningTime="2025-11-26 07:20:45.008682252 +0000 UTC m=+1217.154893498" watchObservedRunningTime="2025-11-26 07:20:45.021785768 +0000 UTC m=+1217.167996944" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.035094 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.65417395 podStartE2EDuration="6.035078011s" podCreationTimestamp="2025-11-26 07:20:39 +0000 UTC" firstStartedPulling="2025-11-26 07:20:40.332753462 +0000 UTC m=+1212.478964628" lastFinishedPulling="2025-11-26 07:20:43.713657523 +0000 UTC m=+1215.859868689" observedRunningTime="2025-11-26 07:20:45.033820716 +0000 UTC m=+1217.180031892" watchObservedRunningTime="2025-11-26 07:20:45.035078011 +0000 UTC m=+1217.181289187" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.060206 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.545884657 podStartE2EDuration="7.060175623s" podCreationTimestamp="2025-11-26 
07:20:38 +0000 UTC" firstStartedPulling="2025-11-26 07:20:40.199375728 +0000 UTC m=+1212.345586894" lastFinishedPulling="2025-11-26 07:20:43.713666694 +0000 UTC m=+1215.859877860" observedRunningTime="2025-11-26 07:20:45.056034788 +0000 UTC m=+1217.202245964" watchObservedRunningTime="2025-11-26 07:20:45.060175623 +0000 UTC m=+1217.206386829" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.077898 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.395116446 podStartE2EDuration="7.077879168s" podCreationTimestamp="2025-11-26 07:20:38 +0000 UTC" firstStartedPulling="2025-11-26 07:20:40.028933337 +0000 UTC m=+1212.175144503" lastFinishedPulling="2025-11-26 07:20:43.711696059 +0000 UTC m=+1215.857907225" observedRunningTime="2025-11-26 07:20:45.07507739 +0000 UTC m=+1217.221288566" watchObservedRunningTime="2025-11-26 07:20:45.077879168 +0000 UTC m=+1217.224090344" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.579046 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.732976 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-config-data\") pod \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.733308 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-combined-ca-bundle\") pod \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.733455 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfznd\" (UniqueName: \"kubernetes.io/projected/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-kube-api-access-bfznd\") pod \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.733749 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-logs\") pod \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\" (UID: \"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4\") " Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.734311 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-logs" (OuterVolumeSpecName: "logs") pod "e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" (UID: "e7788c71-5eea-4319-a0d1-e7c1f5d30cb4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.734808 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.752706 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-kube-api-access-bfznd" (OuterVolumeSpecName: "kube-api-access-bfznd") pod "e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" (UID: "e7788c71-5eea-4319-a0d1-e7c1f5d30cb4"). InnerVolumeSpecName "kube-api-access-bfznd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.780185 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" (UID: "e7788c71-5eea-4319-a0d1-e7c1f5d30cb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.791741 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-config-data" (OuterVolumeSpecName: "config-data") pod "e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" (UID: "e7788c71-5eea-4319-a0d1-e7c1f5d30cb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.836655 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfznd\" (UniqueName: \"kubernetes.io/projected/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-kube-api-access-bfznd\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.836695 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.836710 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:45 crc kubenswrapper[4909]: I1126 07:20:45.995302 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.033301 4909 generic.go:334] "Generic (PLEG): container finished" podID="e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" containerID="187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c" exitCode=0 Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.033359 4909 generic.go:334] "Generic (PLEG): container finished" podID="e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" containerID="25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538" exitCode=143 Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.033376 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4","Type":"ContainerDied","Data":"187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c"} Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.033456 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4","Type":"ContainerDied","Data":"25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538"} Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.033502 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7788c71-5eea-4319-a0d1-e7c1f5d30cb4","Type":"ContainerDied","Data":"d1f8b3d339c5c62f79fb7ffe33ef74c4a6cd2a038f009b501546ddacf90038ed"} Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.033530 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.033537 4909 scope.go:117] "RemoveContainer" containerID="187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.069826 4909 scope.go:117] "RemoveContainer" containerID="25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.093802 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.105730 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.113641 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:20:46 crc kubenswrapper[4909]: E1126 07:20:46.114055 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" containerName="nova-metadata-metadata" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.114080 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" containerName="nova-metadata-metadata" Nov 26 07:20:46 crc kubenswrapper[4909]: E1126 07:20:46.114099 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" containerName="nova-metadata-log" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.114106 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" containerName="nova-metadata-log" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.114297 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" containerName="nova-metadata-metadata" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.114321 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" containerName="nova-metadata-log" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.115294 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.117854 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.122198 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.135789 4909 scope.go:117] "RemoveContainer" containerID="187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c" Nov 26 07:20:46 crc kubenswrapper[4909]: E1126 07:20:46.138952 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c\": container with ID starting with 187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c not found: ID does not exist" containerID="187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.138992 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c"} err="failed to get container status \"187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c\": rpc error: code = NotFound desc = could not find container \"187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c\": container with ID starting with 187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c not found: ID does not exist" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.139016 4909 scope.go:117] "RemoveContainer" containerID="25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.140316 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:20:46 crc kubenswrapper[4909]: E1126 07:20:46.140314 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538\": container with ID starting with 25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538 not found: ID does not exist" containerID="25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.140403 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538"} err="failed to get container status \"25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538\": rpc error: code = NotFound desc = could not find container \"25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538\": container with ID starting with 25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538 not found: ID does not exist" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.140483 4909 scope.go:117] "RemoveContainer" containerID="187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.140783 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c"} err="failed to get container status \"187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c\": rpc error: 
code = NotFound desc = could not find container \"187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c\": container with ID starting with 187e9eac78712d7a3a8c91bd6143dc4af5058ca018c4b952e9be59d512965a2c not found: ID does not exist" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.140825 4909 scope.go:117] "RemoveContainer" containerID="25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.143189 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538"} err="failed to get container status \"25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538\": rpc error: code = NotFound desc = could not find container \"25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538\": container with ID starting with 25535de58105690fd57bec7e9f8b0866758b45ce20046cea7bef9b2765bac538 not found: ID does not exist" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.244224 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-config-data\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.244276 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/960be784-6802-4653-b5bf-df571b61dd8a-logs\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.244319 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.244496 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtd8g\" (UniqueName: \"kubernetes.io/projected/960be784-6802-4653-b5bf-df571b61dd8a-kube-api-access-dtd8g\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.244786 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.346766 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.347110 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-config-data\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.347230 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/960be784-6802-4653-b5bf-df571b61dd8a-logs\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.347644 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/960be784-6802-4653-b5bf-df571b61dd8a-logs\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.347917 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.348352 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtd8g\" (UniqueName: \"kubernetes.io/projected/960be784-6802-4653-b5bf-df571b61dd8a-kube-api-access-dtd8g\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.351009 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.351940 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.353142 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-config-data\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.376451 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtd8g\" (UniqueName: \"kubernetes.io/projected/960be784-6802-4653-b5bf-df571b61dd8a-kube-api-access-dtd8g\") pod \"nova-metadata-0\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.449452 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.512316 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7788c71-5eea-4319-a0d1-e7c1f5d30cb4" path="/var/lib/kubelet/pods/e7788c71-5eea-4319-a0d1-e7c1f5d30cb4/volumes" Nov 26 07:20:46 crc kubenswrapper[4909]: W1126 07:20:46.917980 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod960be784_6802_4653_b5bf_df571b61dd8a.slice/crio-ad826f314a417fbd83d4aa43151f27fab1d939a8fa2aee5318043b7b84611a0b WatchSource:0}: Error finding container ad826f314a417fbd83d4aa43151f27fab1d939a8fa2aee5318043b7b84611a0b: Status 404 returned error can't find the container with id ad826f314a417fbd83d4aa43151f27fab1d939a8fa2aee5318043b7b84611a0b Nov 26 07:20:46 crc kubenswrapper[4909]: I1126 07:20:46.918605 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:20:47 crc kubenswrapper[4909]: I1126 07:20:47.046463 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"960be784-6802-4653-b5bf-df571b61dd8a","Type":"ContainerStarted","Data":"ad826f314a417fbd83d4aa43151f27fab1d939a8fa2aee5318043b7b84611a0b"} Nov 26 07:20:48 crc kubenswrapper[4909]: I1126 07:20:48.065698 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"960be784-6802-4653-b5bf-df571b61dd8a","Type":"ContainerStarted","Data":"d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8"} Nov 26 07:20:48 crc kubenswrapper[4909]: I1126 07:20:48.066000 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"960be784-6802-4653-b5bf-df571b61dd8a","Type":"ContainerStarted","Data":"1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec"} Nov 26 07:20:48 crc kubenswrapper[4909]: I1126 07:20:48.088968 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.088951719 podStartE2EDuration="2.088951719s" podCreationTimestamp="2025-11-26 07:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:20:48.084874176 +0000 UTC m=+1220.231085352" watchObservedRunningTime="2025-11-26 07:20:48.088951719 +0000 UTC m=+1220.235162885" Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.075976 4909 generic.go:334] "Generic (PLEG): container finished" podID="602f4606-6ad9-4358-935e-b4dcc0282e50" containerID="4add6d1b0117e9447b11bea0e6b55ba54c459b24effc28c5fb858469b66444e8" exitCode=0 Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.076024 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jlct9" event={"ID":"602f4606-6ad9-4358-935e-b4dcc0282e50","Type":"ContainerDied","Data":"4add6d1b0117e9447b11bea0e6b55ba54c459b24effc28c5fb858469b66444e8"} Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.078980 4909 generic.go:334] "Generic (PLEG): container finished" podID="b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8" containerID="c340f71bc1c6c25a928c8f228589f2df2d196a7a03b5b16c0c1846e902a918ac" exitCode=0 Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.079804 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2zb5p" 
event={"ID":"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8","Type":"ContainerDied","Data":"c340f71bc1c6c25a928c8f228589f2df2d196a7a03b5b16c0c1846e902a918ac"} Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.287995 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.288051 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.364768 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.422286 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.423419 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.470823 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.726636 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.822073 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-brwhf"] Nov 26 07:20:49 crc kubenswrapper[4909]: I1126 07:20:49.822304 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" podUID="59b07954-e19a-4f32-af95-f1e1de784683" containerName="dnsmasq-dns" containerID="cri-o://247b342ffc64904df65f2383aee5ca8a0188634c09e09fc107fb61f43ce0f4b1" gracePeriod=10 Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.101641 4909 generic.go:334] "Generic (PLEG): container finished" podID="59b07954-e19a-4f32-af95-f1e1de784683" containerID="247b342ffc64904df65f2383aee5ca8a0188634c09e09fc107fb61f43ce0f4b1" exitCode=0 Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.102032 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" event={"ID":"59b07954-e19a-4f32-af95-f1e1de784683","Type":"ContainerDied","Data":"247b342ffc64904df65f2383aee5ca8a0188634c09e09fc107fb61f43ce0f4b1"} Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.177877 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.371517 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.372705 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.511353 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.664301 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-svc\") pod \"59b07954-e19a-4f32-af95-f1e1de784683\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.664690 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-sb\") pod \"59b07954-e19a-4f32-af95-f1e1de784683\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.665008 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhtn9\" (UniqueName: \"kubernetes.io/projected/59b07954-e19a-4f32-af95-f1e1de784683-kube-api-access-nhtn9\") pod \"59b07954-e19a-4f32-af95-f1e1de784683\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.665054 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-swift-storage-0\") pod \"59b07954-e19a-4f32-af95-f1e1de784683\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.665885 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-nb\") pod \"59b07954-e19a-4f32-af95-f1e1de784683\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.665952 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-config\") pod \"59b07954-e19a-4f32-af95-f1e1de784683\" (UID: \"59b07954-e19a-4f32-af95-f1e1de784683\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.672296 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b07954-e19a-4f32-af95-f1e1de784683-kube-api-access-nhtn9" (OuterVolumeSpecName: "kube-api-access-nhtn9") pod "59b07954-e19a-4f32-af95-f1e1de784683" (UID: "59b07954-e19a-4f32-af95-f1e1de784683"). InnerVolumeSpecName "kube-api-access-nhtn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.675090 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.693709 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.727559 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-config" (OuterVolumeSpecName: "config") pod "59b07954-e19a-4f32-af95-f1e1de784683" (UID: "59b07954-e19a-4f32-af95-f1e1de784683"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.733174 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59b07954-e19a-4f32-af95-f1e1de784683" (UID: "59b07954-e19a-4f32-af95-f1e1de784683"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.759519 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59b07954-e19a-4f32-af95-f1e1de784683" (UID: "59b07954-e19a-4f32-af95-f1e1de784683"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.770331 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-combined-ca-bundle\") pod \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.770420 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-config-data\") pod \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.770480 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-scripts\") pod \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.770575 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bp7p\" (UniqueName: \"kubernetes.io/projected/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-kube-api-access-2bp7p\") pod \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\" (UID: \"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.771128 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhtn9\" (UniqueName: \"kubernetes.io/projected/59b07954-e19a-4f32-af95-f1e1de784683-kube-api-access-nhtn9\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.771145 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.771154 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.771168 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.805325 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-kube-api-access-2bp7p" (OuterVolumeSpecName: "kube-api-access-2bp7p") pod "b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8" (UID: "b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8"). InnerVolumeSpecName "kube-api-access-2bp7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.805420 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-scripts" (OuterVolumeSpecName: "scripts") pod "b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8" (UID: "b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.805580 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59b07954-e19a-4f32-af95-f1e1de784683" (UID: "59b07954-e19a-4f32-af95-f1e1de784683"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.824493 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8" (UID: "b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.837513 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-config-data" (OuterVolumeSpecName: "config-data") pod "b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8" (UID: "b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.847696 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "59b07954-e19a-4f32-af95-f1e1de784683" (UID: "59b07954-e19a-4f32-af95-f1e1de784683"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.847934 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.848117 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b87acf53-c499-4454-b417-a54a78973b10" containerName="kube-state-metrics" containerID="cri-o://81887acfac475fba90044a7146243a1f3bbe2dbd1f0f3e8865736148b105cdd0" gracePeriod=30 Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.871925 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bpps\" (UniqueName: \"kubernetes.io/projected/602f4606-6ad9-4358-935e-b4dcc0282e50-kube-api-access-5bpps\") pod \"602f4606-6ad9-4358-935e-b4dcc0282e50\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.871992 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-combined-ca-bundle\") pod \"602f4606-6ad9-4358-935e-b4dcc0282e50\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.872111 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-config-data\") pod \"602f4606-6ad9-4358-935e-b4dcc0282e50\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.872156 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-scripts\") pod \"602f4606-6ad9-4358-935e-b4dcc0282e50\" (UID: \"602f4606-6ad9-4358-935e-b4dcc0282e50\") " Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.872510 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.872528 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.872538 4909 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.872549 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bp7p\" (UniqueName: \"kubernetes.io/projected/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-kube-api-access-2bp7p\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.872557 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.872565 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/59b07954-e19a-4f32-af95-f1e1de784683-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.879739 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602f4606-6ad9-4358-935e-b4dcc0282e50-kube-api-access-5bpps" (OuterVolumeSpecName: "kube-api-access-5bpps") pod "602f4606-6ad9-4358-935e-b4dcc0282e50" (UID: "602f4606-6ad9-4358-935e-b4dcc0282e50"). InnerVolumeSpecName "kube-api-access-5bpps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.879828 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-scripts" (OuterVolumeSpecName: "scripts") pod "602f4606-6ad9-4358-935e-b4dcc0282e50" (UID: "602f4606-6ad9-4358-935e-b4dcc0282e50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.912340 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "602f4606-6ad9-4358-935e-b4dcc0282e50" (UID: "602f4606-6ad9-4358-935e-b4dcc0282e50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.912377 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-config-data" (OuterVolumeSpecName: "config-data") pod "602f4606-6ad9-4358-935e-b4dcc0282e50" (UID: "602f4606-6ad9-4358-935e-b4dcc0282e50"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.973717 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bpps\" (UniqueName: \"kubernetes.io/projected/602f4606-6ad9-4358-935e-b4dcc0282e50-kube-api-access-5bpps\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.973754 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.973765 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:50 crc kubenswrapper[4909]: I1126 07:20:50.973774 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f4606-6ad9-4358-935e-b4dcc0282e50-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.114045 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2zb5p" event={"ID":"b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8","Type":"ContainerDied","Data":"46013bbe649575c302d899466388569783b978b4f671e4a26fb1dbda0db3e7fd"} Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.114089 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46013bbe649575c302d899466388569783b978b4f671e4a26fb1dbda0db3e7fd" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.114166 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2zb5p" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.136854 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" event={"ID":"59b07954-e19a-4f32-af95-f1e1de784683","Type":"ContainerDied","Data":"ec41fca4e5fe6c058ec27f30cee66a7ed8cbab3198e1a5f1f852d0e9a18574e8"} Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.136887 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-brwhf" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.136900 4909 scope.go:117] "RemoveContainer" containerID="247b342ffc64904df65f2383aee5ca8a0188634c09e09fc107fb61f43ce0f4b1" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.138964 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jlct9" event={"ID":"602f4606-6ad9-4358-935e-b4dcc0282e50","Type":"ContainerDied","Data":"2485d20d3b0beede24da0c9a436ecff79c38ee034e8c6e584e1ac4b486d18d0e"} Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.138995 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2485d20d3b0beede24da0c9a436ecff79c38ee034e8c6e584e1ac4b486d18d0e" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.139055 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jlct9" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.151363 4909 generic.go:334] "Generic (PLEG): container finished" podID="b87acf53-c499-4454-b417-a54a78973b10" containerID="81887acfac475fba90044a7146243a1f3bbe2dbd1f0f3e8865736148b105cdd0" exitCode=2 Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.151636 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b87acf53-c499-4454-b417-a54a78973b10","Type":"ContainerDied","Data":"81887acfac475fba90044a7146243a1f3bbe2dbd1f0f3e8865736148b105cdd0"} Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.160018 4909 scope.go:117] "RemoveContainer" containerID="d892db28243a047dd6234031566953da43ace32bce833effe48bc4f81f8c8ca6" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.177180 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 07:20:51 crc kubenswrapper[4909]: E1126 07:20:51.177573 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b07954-e19a-4f32-af95-f1e1de784683" containerName="init" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.177650 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b07954-e19a-4f32-af95-f1e1de784683" containerName="init" Nov 26 07:20:51 crc kubenswrapper[4909]: E1126 07:20:51.177661 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602f4606-6ad9-4358-935e-b4dcc0282e50" containerName="nova-cell1-conductor-db-sync" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.177667 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="602f4606-6ad9-4358-935e-b4dcc0282e50" containerName="nova-cell1-conductor-db-sync" Nov 26 07:20:51 crc kubenswrapper[4909]: E1126 07:20:51.177680 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b07954-e19a-4f32-af95-f1e1de784683" containerName="dnsmasq-dns" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.177686 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b07954-e19a-4f32-af95-f1e1de784683" containerName="dnsmasq-dns" Nov 26 07:20:51 crc kubenswrapper[4909]: E1126 07:20:51.177709 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8" containerName="nova-manage" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.177716 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8" containerName="nova-manage" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.177890 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="602f4606-6ad9-4358-935e-b4dcc0282e50" containerName="nova-cell1-conductor-db-sync" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.177898 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="59b07954-e19a-4f32-af95-f1e1de784683" containerName="dnsmasq-dns" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.177916 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8" containerName="nova-manage" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.178922 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.183719 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.203960 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-brwhf"] Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.220636 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.253777 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-brwhf"] Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.282481 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzmbf\" (UniqueName: \"kubernetes.io/projected/2c6b5670-38ee-4d52-af67-1e187962d73d-kube-api-access-xzmbf\") pod \"nova-cell1-conductor-0\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " pod="openstack/nova-cell1-conductor-0" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.282574 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " pod="openstack/nova-cell1-conductor-0" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.282632 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " pod="openstack/nova-cell1-conductor-0" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.380120 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.380339 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerName="nova-api-log" containerID="cri-o://a34c38ba223c2da590058140a9a5113c8d320f00acffe03647c7b7617e414d9b" gracePeriod=30 Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.380736 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerName="nova-api-api" containerID="cri-o://035a95b7b61112716739cee417c7721c0752de1e7471414dd007c9c6151254a7" gracePeriod=30 Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.384865 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzmbf\" (UniqueName: \"kubernetes.io/projected/2c6b5670-38ee-4d52-af67-1e187962d73d-kube-api-access-xzmbf\") pod \"nova-cell1-conductor-0\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " pod="openstack/nova-cell1-conductor-0" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.384917 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " pod="openstack/nova-cell1-conductor-0" Nov 26 07:20:51 crc 
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.384957 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " pod="openstack/nova-cell1-conductor-0"
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.390996 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " pod="openstack/nova-cell1-conductor-0"
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.392041 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.393673 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " pod="openstack/nova-cell1-conductor-0"
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.409919 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.410157 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="960be784-6802-4653-b5bf-df571b61dd8a" containerName="nova-metadata-log" containerID="cri-o://1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec" gracePeriod=30
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.410480 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="960be784-6802-4653-b5bf-df571b61dd8a" containerName="nova-metadata-metadata" containerID="cri-o://d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8" gracePeriod=30
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.422696 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzmbf\" (UniqueName: \"kubernetes.io/projected/2c6b5670-38ee-4d52-af67-1e187962d73d-kube-api-access-xzmbf\") pod \"nova-cell1-conductor-0\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " pod="openstack/nova-cell1-conductor-0"
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.444846 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.449898 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.449945 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.550220 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.591997 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr2kz\" (UniqueName: \"kubernetes.io/projected/b87acf53-c499-4454-b417-a54a78973b10-kube-api-access-wr2kz\") pod \"b87acf53-c499-4454-b417-a54a78973b10\" (UID: \"b87acf53-c499-4454-b417-a54a78973b10\") " Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.596721 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87acf53-c499-4454-b417-a54a78973b10-kube-api-access-wr2kz" (OuterVolumeSpecName: "kube-api-access-wr2kz") pod "b87acf53-c499-4454-b417-a54a78973b10" (UID: "b87acf53-c499-4454-b417-a54a78973b10"). InnerVolumeSpecName "kube-api-access-wr2kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.698796 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr2kz\" (UniqueName: \"kubernetes.io/projected/b87acf53-c499-4454-b417-a54a78973b10-kube-api-access-wr2kz\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:51 crc kubenswrapper[4909]: I1126 07:20:51.892127 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.005637 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/960be784-6802-4653-b5bf-df571b61dd8a-logs\") pod \"960be784-6802-4653-b5bf-df571b61dd8a\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.005718 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-config-data\") pod \"960be784-6802-4653-b5bf-df571b61dd8a\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.005789 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtd8g\" (UniqueName: \"kubernetes.io/projected/960be784-6802-4653-b5bf-df571b61dd8a-kube-api-access-dtd8g\") pod \"960be784-6802-4653-b5bf-df571b61dd8a\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.005811 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-nova-metadata-tls-certs\") pod \"960be784-6802-4653-b5bf-df571b61dd8a\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.005925 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-combined-ca-bundle\") pod \"960be784-6802-4653-b5bf-df571b61dd8a\" (UID: \"960be784-6802-4653-b5bf-df571b61dd8a\") " Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.006606 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/960be784-6802-4653-b5bf-df571b61dd8a-logs" (OuterVolumeSpecName: "logs") pod "960be784-6802-4653-b5bf-df571b61dd8a" (UID: "960be784-6802-4653-b5bf-df571b61dd8a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.010713 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960be784-6802-4653-b5bf-df571b61dd8a-kube-api-access-dtd8g" (OuterVolumeSpecName: "kube-api-access-dtd8g") pod "960be784-6802-4653-b5bf-df571b61dd8a" (UID: "960be784-6802-4653-b5bf-df571b61dd8a"). InnerVolumeSpecName "kube-api-access-dtd8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.037679 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-config-data" (OuterVolumeSpecName: "config-data") pod "960be784-6802-4653-b5bf-df571b61dd8a" (UID: "960be784-6802-4653-b5bf-df571b61dd8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.050645 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "960be784-6802-4653-b5bf-df571b61dd8a" (UID: "960be784-6802-4653-b5bf-df571b61dd8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.069304 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "960be784-6802-4653-b5bf-df571b61dd8a" (UID: "960be784-6802-4653-b5bf-df571b61dd8a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.108474 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtd8g\" (UniqueName: \"kubernetes.io/projected/960be784-6802-4653-b5bf-df571b61dd8a-kube-api-access-dtd8g\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.108509 4909 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.108523 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.108536 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/960be784-6802-4653-b5bf-df571b61dd8a-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.108549 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be784-6802-4653-b5bf-df571b61dd8a-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.122264 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.170973 4909 generic.go:334] "Generic (PLEG): container finished" podID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerID="a34c38ba223c2da590058140a9a5113c8d320f00acffe03647c7b7617e414d9b" exitCode=143 Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.171051 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"692dfe86-909b-41d9-bcf2-19ed88b3b9cb","Type":"ContainerDied","Data":"a34c38ba223c2da590058140a9a5113c8d320f00acffe03647c7b7617e414d9b"} Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.173111 4909 generic.go:334] "Generic (PLEG): container finished" podID="960be784-6802-4653-b5bf-df571b61dd8a" containerID="d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8" exitCode=0 Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.173139 4909 generic.go:334] "Generic (PLEG): container finished" podID="960be784-6802-4653-b5bf-df571b61dd8a" containerID="1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec" exitCode=143 Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.173172 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.173190 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"960be784-6802-4653-b5bf-df571b61dd8a","Type":"ContainerDied","Data":"d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8"} Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.173218 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"960be784-6802-4653-b5bf-df571b61dd8a","Type":"ContainerDied","Data":"1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec"} Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.173230 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"960be784-6802-4653-b5bf-df571b61dd8a","Type":"ContainerDied","Data":"ad826f314a417fbd83d4aa43151f27fab1d939a8fa2aee5318043b7b84611a0b"} Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.173244 4909 scope.go:117] "RemoveContainer" containerID="d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.179405 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b87acf53-c499-4454-b417-a54a78973b10","Type":"ContainerDied","Data":"213fe145ccfadfc8f881b37ad349fec8de7c26712ef19e20bafc37e9578b95bf"} Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.179502 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.190102 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2c6b5670-38ee-4d52-af67-1e187962d73d","Type":"ContainerStarted","Data":"dd9a82bbcf39c1326fd29560bf5228a4882b5b65ff10e9309b763382bf5c3797"} Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.219966 4909 scope.go:117] "RemoveContainer" containerID="1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.250373 4909 scope.go:117] "RemoveContainer" containerID="d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8" Nov 26 07:20:52 crc kubenswrapper[4909]: E1126 07:20:52.251013 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8\": container with ID starting with d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8 not found: ID does not exist" containerID="d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.251071 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8"} err="failed to get container status \"d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8\": rpc error: code = NotFound desc = could not find container \"d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8\": container with ID starting with d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8 not found: ID does not exist" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.251101 4909 scope.go:117] "RemoveContainer" containerID="1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec" Nov 26 07:20:52 crc 
Nov 26 07:20:52 crc kubenswrapper[4909]: E1126 07:20:52.251753 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec\": container with ID starting with 1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec not found: ID does not exist" containerID="1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec"
Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.251799 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec"} err="failed to get container status \"1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec\": rpc error: code = NotFound desc = could not find container \"1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec\": container with ID starting with 1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec not found: ID does not exist"
Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.251823 4909 scope.go:117] "RemoveContainer" containerID="d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8"
Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.252845 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8"} err="failed to get container status \"d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8\": rpc error: code = NotFound desc = could not find container \"d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8\": container with ID starting with d99cd57c80a6288ce05d55cc7d37477845db924e5799e5996eabd345995960a8 not found: ID does not exist"
Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.252875 4909 scope.go:117] "RemoveContainer" containerID="1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec"
Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.253119 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec"} err="failed to get container status \"1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec\": rpc error: code = NotFound desc = could not find container \"1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec\": container with ID starting with 1db5557f21f208efe95391ea93f24e990eb75618ce88bddb383ef2c493fee7ec not found: ID does not exist"
Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.253141 4909 scope.go:117] "RemoveContainer" containerID="81887acfac475fba90044a7146243a1f3bbe2dbd1f0f3e8865736148b105cdd0"
Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.269936 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.296653 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.311964 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.318189 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 26 07:20:52 crc kubenswrapper[4909]: E1126 07:20:52.318629 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960be784-6802-4653-b5bf-df571b61dd8a" containerName="nova-metadata-log"
containerName="nova-metadata-log" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.318649 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="960be784-6802-4653-b5bf-df571b61dd8a" containerName="nova-metadata-log" Nov 26 07:20:52 crc kubenswrapper[4909]: E1126 07:20:52.318662 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960be784-6802-4653-b5bf-df571b61dd8a" containerName="nova-metadata-metadata" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.318669 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="960be784-6802-4653-b5bf-df571b61dd8a" containerName="nova-metadata-metadata" Nov 26 07:20:52 crc kubenswrapper[4909]: E1126 07:20:52.318699 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87acf53-c499-4454-b417-a54a78973b10" containerName="kube-state-metrics" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.318705 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87acf53-c499-4454-b417-a54a78973b10" containerName="kube-state-metrics" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.318891 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="960be784-6802-4653-b5bf-df571b61dd8a" containerName="nova-metadata-log" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.318907 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="960be784-6802-4653-b5bf-df571b61dd8a" containerName="nova-metadata-metadata" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.318920 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87acf53-c499-4454-b417-a54a78973b10" containerName="kube-state-metrics" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.319945 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.323364 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.323638 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.328995 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.336977 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.345162 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.346557 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.349132 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.349308 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.370673 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.511301 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b07954-e19a-4f32-af95-f1e1de784683" path="/var/lib/kubelet/pods/59b07954-e19a-4f32-af95-f1e1de784683/volumes" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.512175 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960be784-6802-4653-b5bf-df571b61dd8a" path="/var/lib/kubelet/pods/960be784-6802-4653-b5bf-df571b61dd8a/volumes" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.512902 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b87acf53-c499-4454-b417-a54a78973b10" path="/var/lib/kubelet/pods/b87acf53-c499-4454-b417-a54a78973b10/volumes" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.513866 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.513921 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.514141 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.514184 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-logs\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.514251 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.514343 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25cn\" (UniqueName: 
\"kubernetes.io/projected/0b222993-a4da-4936-807a-9e99c637bc27-kube-api-access-h25cn\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.514374 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq9nj\" (UniqueName: \"kubernetes.io/projected/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-kube-api-access-cq9nj\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.514405 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-config-data\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.514430 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.625560 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.625736 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.625766 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-logs\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.625809 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.625868 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25cn\" (UniqueName: \"kubernetes.io/projected/0b222993-a4da-4936-807a-9e99c637bc27-kube-api-access-h25cn\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.625884 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq9nj\" (UniqueName: \"kubernetes.io/projected/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-kube-api-access-cq9nj\") pod \"nova-metadata-0\" (UID: 
\"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.625910 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-config-data\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.625944 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.626039 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.628955 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-logs\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.629174 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.632362 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.639631 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.641691 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.643980 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-config-data\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.644395 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.644948 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25cn\" (UniqueName: \"kubernetes.io/projected/0b222993-a4da-4936-807a-9e99c637bc27-kube-api-access-h25cn\") pod \"kube-state-metrics-0\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.646083 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq9nj\" (UniqueName: \"kubernetes.io/projected/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-kube-api-access-cq9nj\") pod \"nova-metadata-0\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " pod="openstack/nova-metadata-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.667885 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.751088 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.751817 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="ceilometer-central-agent" containerID="cri-o://c09272442306797377a34fea6b825f4a3bfc6ac3adcdca53ab3b1b04d411465b" gracePeriod=30 Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.753457 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="proxy-httpd" containerID="cri-o://dec09c7c9b97a0fda4240ea49e5f945d53afc04db48ec6a96e45a8ac7eb601c5" gracePeriod=30 Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.753560 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="sg-core" containerID="cri-o://87737cf7a5b6e04a000e60623e46ccb99f0285de87be4d2bc694518d70089d85" gracePeriod=30 Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.756490 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="ceilometer-notification-agent" containerID="cri-o://23163dadbbe584f437f6be1e20951a2a39fd9926b64bb83d12f127d7596c4452" gracePeriod=30 Nov 26 07:20:52 crc kubenswrapper[4909]: I1126 07:20:52.941740 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.203157 4909 generic.go:334] "Generic (PLEG): container finished" podID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerID="dec09c7c9b97a0fda4240ea49e5f945d53afc04db48ec6a96e45a8ac7eb601c5" exitCode=0 Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.203370 4909 generic.go:334] "Generic (PLEG): container finished" podID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerID="87737cf7a5b6e04a000e60623e46ccb99f0285de87be4d2bc694518d70089d85" exitCode=2 Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.203381 4909 generic.go:334] "Generic (PLEG): container finished" podID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerID="c09272442306797377a34fea6b825f4a3bfc6ac3adcdca53ab3b1b04d411465b" exitCode=0 Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.203453 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"674d21c8-e4a1-426b-a92f-0cb3ce669cdd","Type":"ContainerDied","Data":"dec09c7c9b97a0fda4240ea49e5f945d53afc04db48ec6a96e45a8ac7eb601c5"} Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.203476 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"674d21c8-e4a1-426b-a92f-0cb3ce669cdd","Type":"ContainerDied","Data":"87737cf7a5b6e04a000e60623e46ccb99f0285de87be4d2bc694518d70089d85"} Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.203488 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"674d21c8-e4a1-426b-a92f-0cb3ce669cdd","Type":"ContainerDied","Data":"c09272442306797377a34fea6b825f4a3bfc6ac3adcdca53ab3b1b04d411465b"} Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.207627 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2c6b5670-38ee-4d52-af67-1e187962d73d","Type":"ContainerStarted","Data":"2f6a9a868f36816e8779d6bd9b8ec2e106d2790ceb880151f8a96ae57bf045a4"} Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.207816 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.209481 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ac655ffd-a946-4bf2-90e8-242851bb6dca" containerName="nova-scheduler-scheduler" containerID="cri-o://dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d" gracePeriod=30 Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.235798 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.235776541 podStartE2EDuration="2.235776541s" podCreationTimestamp="2025-11-26 07:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:20:53.223075645 +0000 UTC m=+1225.369286821" watchObservedRunningTime="2025-11-26 07:20:53.235776541 +0000 UTC m=+1225.381987707" Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.251011 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 07:20:53 crc kubenswrapper[4909]: I1126 07:20:53.469387 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:20:53 crc kubenswrapper[4909]: W1126 07:20:53.477023 4909 manager.go:1169] Failed to process watch event {EventType:0 
Nov 26 07:20:54 crc kubenswrapper[4909]: I1126 07:20:54.236916 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b222993-a4da-4936-807a-9e99c637bc27","Type":"ContainerStarted","Data":"85ed4d9b155d719cd9be008a478f8c30dd565960a367abc54ca0b3cdd6d157e2"}
Nov 26 07:20:54 crc kubenswrapper[4909]: I1126 07:20:54.237221 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b222993-a4da-4936-807a-9e99c637bc27","Type":"ContainerStarted","Data":"bea3f7be85f565776b96e6bf69994caf8e89e1606c65a8fbe7d726497d89357e"}
Nov 26 07:20:54 crc kubenswrapper[4909]: I1126 07:20:54.238163 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 26 07:20:54 crc kubenswrapper[4909]: I1126 07:20:54.241834 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88","Type":"ContainerStarted","Data":"cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6"}
Nov 26 07:20:54 crc kubenswrapper[4909]: I1126 07:20:54.241865 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88","Type":"ContainerStarted","Data":"4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2"}
Nov 26 07:20:54 crc kubenswrapper[4909]: I1126 07:20:54.241880 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88","Type":"ContainerStarted","Data":"7bbce2fd741961eef1b1017c8c69bcb1fcf4837b4e4641986186efd8f4dd117a"}
Nov 26 07:20:54 crc kubenswrapper[4909]: I1126 07:20:54.286525 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.286497291 podStartE2EDuration="2.286497291s" podCreationTimestamp="2025-11-26 07:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:20:54.274783053 +0000 UTC m=+1226.420994229" watchObservedRunningTime="2025-11-26 07:20:54.286497291 +0000 UTC m=+1226.432708477"
Nov 26 07:20:54 crc kubenswrapper[4909]: I1126 07:20:54.291001 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.952922113 podStartE2EDuration="2.290983066s" podCreationTimestamp="2025-11-26 07:20:52 +0000 UTC" firstStartedPulling="2025-11-26 07:20:53.26326137 +0000 UTC m=+1225.409472536" lastFinishedPulling="2025-11-26 07:20:53.601322323 +0000 UTC m=+1225.747533489" observedRunningTime="2025-11-26 07:20:54.258118917 +0000 UTC m=+1226.404330113" watchObservedRunningTime="2025-11-26 07:20:54.290983066 +0000 UTC m=+1226.437194252"
Nov 26 07:20:54 crc kubenswrapper[4909]: E1126 07:20:54.424938 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
containerID="dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 07:20:54 crc kubenswrapper[4909]: E1126 07:20:54.426261 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 07:20:54 crc kubenswrapper[4909]: E1126 07:20:54.427601 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 07:20:54 crc kubenswrapper[4909]: E1126 07:20:54.427673 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ac655ffd-a946-4bf2-90e8-242851bb6dca" containerName="nova-scheduler-scheduler" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.000255 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.097753 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw7d8\" (UniqueName: \"kubernetes.io/projected/ac655ffd-a946-4bf2-90e8-242851bb6dca-kube-api-access-xw7d8\") pod \"ac655ffd-a946-4bf2-90e8-242851bb6dca\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.097805 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-combined-ca-bundle\") pod \"ac655ffd-a946-4bf2-90e8-242851bb6dca\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.097836 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-config-data\") pod \"ac655ffd-a946-4bf2-90e8-242851bb6dca\" (UID: \"ac655ffd-a946-4bf2-90e8-242851bb6dca\") " Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.103783 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac655ffd-a946-4bf2-90e8-242851bb6dca-kube-api-access-xw7d8" (OuterVolumeSpecName: "kube-api-access-xw7d8") pod "ac655ffd-a946-4bf2-90e8-242851bb6dca" (UID: "ac655ffd-a946-4bf2-90e8-242851bb6dca"). InnerVolumeSpecName "kube-api-access-xw7d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.127915 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac655ffd-a946-4bf2-90e8-242851bb6dca" (UID: "ac655ffd-a946-4bf2-90e8-242851bb6dca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.128318 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-config-data" (OuterVolumeSpecName: "config-data") pod "ac655ffd-a946-4bf2-90e8-242851bb6dca" (UID: "ac655ffd-a946-4bf2-90e8-242851bb6dca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.201044 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw7d8\" (UniqueName: \"kubernetes.io/projected/ac655ffd-a946-4bf2-90e8-242851bb6dca-kube-api-access-xw7d8\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.201077 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.201086 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac655ffd-a946-4bf2-90e8-242851bb6dca-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.260321 4909 generic.go:334] "Generic (PLEG): container finished" podID="ac655ffd-a946-4bf2-90e8-242851bb6dca" containerID="dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d" exitCode=0 Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.260358 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac655ffd-a946-4bf2-90e8-242851bb6dca","Type":"ContainerDied","Data":"dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d"} Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.260401 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac655ffd-a946-4bf2-90e8-242851bb6dca","Type":"ContainerDied","Data":"14c074ddb8f3c2c293052fe387f6230b01c2a53bea56b6615c3e8de6e8a5ff78"} Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.260422 4909 scope.go:117] "RemoveContainer" containerID="dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.260739 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.293850 4909 scope.go:117] "RemoveContainer" containerID="dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d" Nov 26 07:20:56 crc kubenswrapper[4909]: E1126 07:20:56.294365 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d\": container with ID starting with dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d not found: ID does not exist" containerID="dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.294482 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d"} err="failed to get container status \"dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d\": rpc error: code = NotFound desc = could not find container \"dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d\": container with ID starting with dd9170bff615cf0f1de4597ad094a1c8debebce7ed5f609b598935839625397d not found: ID does not exist" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.297200 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.311574 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.354257 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:20:56 crc kubenswrapper[4909]: E1126 07:20:56.355333 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac655ffd-a946-4bf2-90e8-242851bb6dca" containerName="nova-scheduler-scheduler" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.355355 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac655ffd-a946-4bf2-90e8-242851bb6dca" containerName="nova-scheduler-scheduler" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.355554 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac655ffd-a946-4bf2-90e8-242851bb6dca" containerName="nova-scheduler-scheduler" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.356376 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.358455 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.367702 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.405736 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-config-data\") pod \"nova-scheduler-0\" (UID: \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.405801 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdrwx\" (UniqueName: \"kubernetes.io/projected/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-kube-api-access-bdrwx\") pod \"nova-scheduler-0\" (UID: \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.405865 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.507910 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdrwx\" (UniqueName: \"kubernetes.io/projected/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-kube-api-access-bdrwx\") pod \"nova-scheduler-0\" (UID: \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.507985 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.508167 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-config-data\") pod \"nova-scheduler-0\" (UID: \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.509327 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac655ffd-a946-4bf2-90e8-242851bb6dca" path="/var/lib/kubelet/pods/ac655ffd-a946-4bf2-90e8-242851bb6dca/volumes" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.511382 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-config-data\") pod \"nova-scheduler-0\" (UID: \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.513263 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.524313 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdrwx\" (UniqueName: \"kubernetes.io/projected/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-kube-api-access-bdrwx\") pod \"nova-scheduler-0\" (UID: \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " pod="openstack/nova-scheduler-0" Nov 26 07:20:56 crc kubenswrapper[4909]: I1126 07:20:56.680704 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.188979 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:20:57 crc kubenswrapper[4909]: W1126 07:20:57.192700 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c988744_5b7a_43b4_8d0b_5c34ee90d2d3.slice/crio-a8422c5e6cbf19438ea57ac7bd6bb317084212ce1e3371d3159d41df99563262 WatchSource:0}: Error finding container a8422c5e6cbf19438ea57ac7bd6bb317084212ce1e3371d3159d41df99563262: Status 404 returned error can't find the container with id a8422c5e6cbf19438ea57ac7bd6bb317084212ce1e3371d3159d41df99563262 Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.269258 4909 generic.go:334] "Generic (PLEG): container finished" podID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerID="035a95b7b61112716739cee417c7721c0752de1e7471414dd007c9c6151254a7" exitCode=0 Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.269316 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"692dfe86-909b-41d9-bcf2-19ed88b3b9cb","Type":"ContainerDied","Data":"035a95b7b61112716739cee417c7721c0752de1e7471414dd007c9c6151254a7"} Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.271766 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3","Type":"ContainerStarted","Data":"a8422c5e6cbf19438ea57ac7bd6bb317084212ce1e3371d3159d41df99563262"} Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.274390 4909 generic.go:334] "Generic (PLEG): container finished" podID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerID="23163dadbbe584f437f6be1e20951a2a39fd9926b64bb83d12f127d7596c4452" exitCode=0 Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.274415 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"674d21c8-e4a1-426b-a92f-0cb3ce669cdd","Type":"ContainerDied","Data":"23163dadbbe584f437f6be1e20951a2a39fd9926b64bb83d12f127d7596c4452"} Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.299233 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.328592 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldl4b\" (UniqueName: \"kubernetes.io/projected/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-kube-api-access-ldl4b\") pod \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.328832 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-config-data\") pod \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.328917 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-logs\") pod \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.329067 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-combined-ca-bundle\") pod \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\" (UID: \"692dfe86-909b-41d9-bcf2-19ed88b3b9cb\") " Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.331131 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-logs" (OuterVolumeSpecName: "logs") pod "692dfe86-909b-41d9-bcf2-19ed88b3b9cb" (UID: "692dfe86-909b-41d9-bcf2-19ed88b3b9cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.334386 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-kube-api-access-ldl4b" (OuterVolumeSpecName: "kube-api-access-ldl4b") pod "692dfe86-909b-41d9-bcf2-19ed88b3b9cb" (UID: "692dfe86-909b-41d9-bcf2-19ed88b3b9cb"). InnerVolumeSpecName "kube-api-access-ldl4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.369274 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-config-data" (OuterVolumeSpecName: "config-data") pod "692dfe86-909b-41d9-bcf2-19ed88b3b9cb" (UID: "692dfe86-909b-41d9-bcf2-19ed88b3b9cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.377788 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.385025 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "692dfe86-909b-41d9-bcf2-19ed88b3b9cb" (UID: "692dfe86-909b-41d9-bcf2-19ed88b3b9cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.430722 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h89t4\" (UniqueName: \"kubernetes.io/projected/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-kube-api-access-h89t4\") pod \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.430762 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-scripts\") pod \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.430790 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-sg-core-conf-yaml\") pod \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.430826 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-config-data\") pod \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.430910 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-run-httpd\") pod \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.431025 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-combined-ca-bundle\") pod \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.431064 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-log-httpd\") pod \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\" (UID: \"674d21c8-e4a1-426b-a92f-0cb3ce669cdd\") " Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.431634 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "674d21c8-e4a1-426b-a92f-0cb3ce669cdd" (UID: "674d21c8-e4a1-426b-a92f-0cb3ce669cdd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.431749 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "674d21c8-e4a1-426b-a92f-0cb3ce669cdd" (UID: "674d21c8-e4a1-426b-a92f-0cb3ce669cdd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.432099 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.432116 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldl4b\" (UniqueName: \"kubernetes.io/projected/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-kube-api-access-ldl4b\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.432126 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.432135 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.432173 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.432185 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692dfe86-909b-41d9-bcf2-19ed88b3b9cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.434795 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-scripts" (OuterVolumeSpecName: "scripts") pod "674d21c8-e4a1-426b-a92f-0cb3ce669cdd" (UID: "674d21c8-e4a1-426b-a92f-0cb3ce669cdd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.435293 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-kube-api-access-h89t4" (OuterVolumeSpecName: "kube-api-access-h89t4") pod "674d21c8-e4a1-426b-a92f-0cb3ce669cdd" (UID: "674d21c8-e4a1-426b-a92f-0cb3ce669cdd"). InnerVolumeSpecName "kube-api-access-h89t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.462989 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "674d21c8-e4a1-426b-a92f-0cb3ce669cdd" (UID: "674d21c8-e4a1-426b-a92f-0cb3ce669cdd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.533527 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-config-data" (OuterVolumeSpecName: "config-data") pod "674d21c8-e4a1-426b-a92f-0cb3ce669cdd" (UID: "674d21c8-e4a1-426b-a92f-0cb3ce669cdd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.533984 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "674d21c8-e4a1-426b-a92f-0cb3ce669cdd" (UID: "674d21c8-e4a1-426b-a92f-0cb3ce669cdd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.534994 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.535028 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h89t4\" (UniqueName: \"kubernetes.io/projected/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-kube-api-access-h89t4\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.535041 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.535052 4909 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.535062 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/674d21c8-e4a1-426b-a92f-0cb3ce669cdd-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.943379 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 07:20:57 crc kubenswrapper[4909]: I1126 07:20:57.943726 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.290097 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"674d21c8-e4a1-426b-a92f-0cb3ce669cdd","Type":"ContainerDied","Data":"d3a92d9bbde00b29cca80ffd9134d6e714dbf71980866ecbb57b9644d7476228"} Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.290168 4909 scope.go:117] "RemoveContainer" containerID="dec09c7c9b97a0fda4240ea49e5f945d53afc04db48ec6a96e45a8ac7eb601c5" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.290328 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.294874 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3","Type":"ContainerStarted","Data":"12ac34164d81708ee6454066bd4299d249cc968383b7ec0cb8044233d2effe68"} Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.306059 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"692dfe86-909b-41d9-bcf2-19ed88b3b9cb","Type":"ContainerDied","Data":"f24f187a66189109f73298b28149997aa4f1debc6a24d54c4b925ab7d7c5bb1c"} Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.306162 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.327174 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.32715061 podStartE2EDuration="2.32715061s" podCreationTimestamp="2025-11-26 07:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:20:58.313751595 +0000 UTC m=+1230.459962761" watchObservedRunningTime="2025-11-26 07:20:58.32715061 +0000 UTC m=+1230.473361786"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.353761 4909 scope.go:117] "RemoveContainer" containerID="87737cf7a5b6e04a000e60623e46ccb99f0285de87be4d2bc694518d70089d85"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.378717 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.400525 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.407719 4909 scope.go:117] "RemoveContainer" containerID="23163dadbbe584f437f6be1e20951a2a39fd9926b64bb83d12f127d7596c4452"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.411028 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.424059 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.443397 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 26 07:20:58 crc kubenswrapper[4909]: E1126 07:20:58.444122 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerName="nova-api-log"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444143 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerName="nova-api-log"
Nov 26 07:20:58 crc kubenswrapper[4909]: E1126 07:20:58.444173 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="sg-core"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444181 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="sg-core"
Nov 26 07:20:58 crc kubenswrapper[4909]: E1126 07:20:58.444197 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="ceilometer-central-agent"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444205 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="ceilometer-central-agent"
Nov 26 07:20:58 crc kubenswrapper[4909]: E1126 07:20:58.444214 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="ceilometer-notification-agent"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444222 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="ceilometer-notification-agent"
Nov 26 07:20:58 crc kubenswrapper[4909]: E1126 07:20:58.444252 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="proxy-httpd"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444259 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="proxy-httpd"
Nov 26 07:20:58 crc kubenswrapper[4909]: E1126 07:20:58.444271 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerName="nova-api-api"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444278 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerName="nova-api-api"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444548 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="ceilometer-central-agent"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444574 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerName="nova-api-api"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444607 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="sg-core"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444625 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="ceilometer-notification-agent"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444639 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" containerName="proxy-httpd"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.444659 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" containerName="nova-api-log"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.446137 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.448854 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.450089 4909 scope.go:117] "RemoveContainer" containerID="c09272442306797377a34fea6b825f4a3bfc6ac3adcdca53ab3b1b04d411465b"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.452180 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.462329 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.479358 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.483728 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.484222 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.484549 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.493581 4909 scope.go:117] "RemoveContainer" containerID="035a95b7b61112716739cee417c7721c0752de1e7471414dd007c9c6151254a7"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.499203 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.528235 4909 scope.go:117] "RemoveContainer" containerID="a34c38ba223c2da590058140a9a5113c8d320f00acffe03647c7b7617e414d9b"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.529936 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="674d21c8-e4a1-426b-a92f-0cb3ce669cdd" path="/var/lib/kubelet/pods/674d21c8-e4a1-426b-a92f-0cb3ce669cdd/volumes"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.530749 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="692dfe86-909b-41d9-bcf2-19ed88b3b9cb" path="/var/lib/kubelet/pods/692dfe86-909b-41d9-bcf2-19ed88b3b9cb/volumes"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550153 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-config-data\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550210 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-scripts\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550286 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550309 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm7cn\" (UniqueName: \"kubernetes.io/projected/6e2e665c-08b4-42fc-98a9-94b6b40db551-kube-api-access-lm7cn\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550337 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-log-httpd\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550379 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-run-httpd\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550407 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hps4q\" (UniqueName: \"kubernetes.io/projected/25d36615-9326-4d16-8f55-a4166ac2f555-kube-api-access-hps4q\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550525 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550549 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e2e665c-08b4-42fc-98a9-94b6b40db551-logs\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550573 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550624 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.550654 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-config-data\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652230 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652281 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm7cn\" (UniqueName: \"kubernetes.io/projected/6e2e665c-08b4-42fc-98a9-94b6b40db551-kube-api-access-lm7cn\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652317 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-log-httpd\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652364 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-run-httpd\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652393 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hps4q\" (UniqueName: \"kubernetes.io/projected/25d36615-9326-4d16-8f55-a4166ac2f555-kube-api-access-hps4q\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652437 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652458 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e2e665c-08b4-42fc-98a9-94b6b40db551-logs\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652485 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652519 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652549 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-config-data\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652598 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-config-data\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652653 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-scripts\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0"
Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652963 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e2e665c-08b4-42fc-98a9-94b6b40db551-logs\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0"
pod="openstack/nova-api-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.652971 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-log-httpd\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.653173 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-run-httpd\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.658384 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.658540 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.658629 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-scripts\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.658833 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.660502 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-config-data\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.666342 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.668949 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hps4q\" (UniqueName: \"kubernetes.io/projected/25d36615-9326-4d16-8f55-a4166ac2f555-kube-api-access-hps4q\") pod \"ceilometer-0\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " pod="openstack/ceilometer-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.669532 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-config-data\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.680264 
4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm7cn\" (UniqueName: \"kubernetes.io/projected/6e2e665c-08b4-42fc-98a9-94b6b40db551-kube-api-access-lm7cn\") pod \"nova-api-0\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " pod="openstack/nova-api-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.767808 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:20:58 crc kubenswrapper[4909]: I1126 07:20:58.805298 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:20:59 crc kubenswrapper[4909]: I1126 07:20:59.273619 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:20:59 crc kubenswrapper[4909]: W1126 07:20:59.277891 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e2e665c_08b4_42fc_98a9_94b6b40db551.slice/crio-3234ecca36c2a0d612f914dd2152a59383eda775ecf0c8a6686b7fcd25abf65b WatchSource:0}: Error finding container 3234ecca36c2a0d612f914dd2152a59383eda775ecf0c8a6686b7fcd25abf65b: Status 404 returned error can't find the container with id 3234ecca36c2a0d612f914dd2152a59383eda775ecf0c8a6686b7fcd25abf65b Nov 26 07:20:59 crc kubenswrapper[4909]: W1126 07:20:59.287505 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25d36615_9326_4d16_8f55_a4166ac2f555.slice/crio-6b769b41ba5f941dec07d5593036c74d8292bffcfc334f1b4c5b38a5836dd789 WatchSource:0}: Error finding container 6b769b41ba5f941dec07d5593036c74d8292bffcfc334f1b4c5b38a5836dd789: Status 404 returned error can't find the container with id 6b769b41ba5f941dec07d5593036c74d8292bffcfc334f1b4c5b38a5836dd789 Nov 26 07:20:59 crc kubenswrapper[4909]: I1126 07:20:59.287772 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:20:59 crc kubenswrapper[4909]: I1126 07:20:59.316136 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25d36615-9326-4d16-8f55-a4166ac2f555","Type":"ContainerStarted","Data":"6b769b41ba5f941dec07d5593036c74d8292bffcfc334f1b4c5b38a5836dd789"} Nov 26 07:20:59 crc kubenswrapper[4909]: I1126 07:20:59.318043 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e2e665c-08b4-42fc-98a9-94b6b40db551","Type":"ContainerStarted","Data":"3234ecca36c2a0d612f914dd2152a59383eda775ecf0c8a6686b7fcd25abf65b"} Nov 26 07:21:00 crc kubenswrapper[4909]: I1126 07:21:00.338999 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25d36615-9326-4d16-8f55-a4166ac2f555","Type":"ContainerStarted","Data":"007766ce28d2af469b493d247580292d5eedccd341f1251422acd113e44a6db3"} Nov 26 07:21:00 crc kubenswrapper[4909]: I1126 07:21:00.343071 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e2e665c-08b4-42fc-98a9-94b6b40db551","Type":"ContainerStarted","Data":"ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526"} Nov 26 07:21:00 crc kubenswrapper[4909]: I1126 07:21:00.343116 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e2e665c-08b4-42fc-98a9-94b6b40db551","Type":"ContainerStarted","Data":"ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446"} Nov 26 07:21:00 crc kubenswrapper[4909]: I1126 
07:21:00.365697 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.365675879 podStartE2EDuration="2.365675879s" podCreationTimestamp="2025-11-26 07:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:21:00.36070132 +0000 UTC m=+1232.506912506" watchObservedRunningTime="2025-11-26 07:21:00.365675879 +0000 UTC m=+1232.511887075" Nov 26 07:21:01 crc kubenswrapper[4909]: I1126 07:21:01.354782 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25d36615-9326-4d16-8f55-a4166ac2f555","Type":"ContainerStarted","Data":"7d64f94a2c4be18cdf26a6cd03c5145beaba741970e383df66509330124c3fa2"} Nov 26 07:21:01 crc kubenswrapper[4909]: I1126 07:21:01.355068 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25d36615-9326-4d16-8f55-a4166ac2f555","Type":"ContainerStarted","Data":"430d2c18966a3f361b936b46ac851fceeca7577b96c68b018f615b3aeaa90008"} Nov 26 07:21:01 crc kubenswrapper[4909]: I1126 07:21:01.578007 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 26 07:21:01 crc kubenswrapper[4909]: I1126 07:21:01.681607 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 26 07:21:02 crc kubenswrapper[4909]: I1126 07:21:02.684425 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 26 07:21:02 crc kubenswrapper[4909]: I1126 07:21:02.943263 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 07:21:02 crc kubenswrapper[4909]: I1126 07:21:02.943580 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 07:21:03 crc kubenswrapper[4909]: I1126 07:21:03.378845 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25d36615-9326-4d16-8f55-a4166ac2f555","Type":"ContainerStarted","Data":"1b2abf35840ff897773209fad488c882281c30bb25231cceb29ba813a82dc5a4"} Nov 26 07:21:03 crc kubenswrapper[4909]: I1126 07:21:03.379219 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 07:21:03 crc kubenswrapper[4909]: I1126 07:21:03.398820 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.342793408 podStartE2EDuration="5.398795196s" podCreationTimestamp="2025-11-26 07:20:58 +0000 UTC" firstStartedPulling="2025-11-26 07:20:59.308462967 +0000 UTC m=+1231.454674133" lastFinishedPulling="2025-11-26 07:21:02.364464745 +0000 UTC m=+1234.510675921" observedRunningTime="2025-11-26 07:21:03.3942543 +0000 UTC m=+1235.540465466" watchObservedRunningTime="2025-11-26 07:21:03.398795196 +0000 UTC m=+1235.545006362" Nov 26 07:21:03 crc kubenswrapper[4909]: I1126 07:21:03.960817 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 26 07:21:03 crc kubenswrapper[4909]: I1126 07:21:03.960862 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 26 07:21:06 crc kubenswrapper[4909]: I1126 07:21:06.681632 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 26 07:21:06 crc kubenswrapper[4909]: I1126 07:21:06.719329 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 26 07:21:07 crc kubenswrapper[4909]: I1126 07:21:07.463490 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 26 07:21:08 crc kubenswrapper[4909]: I1126 07:21:08.768864 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 07:21:08 crc kubenswrapper[4909]: I1126 07:21:08.769299 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 07:21:09 crc kubenswrapper[4909]: I1126 07:21:09.851845 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 07:21:09 crc kubenswrapper[4909]: I1126 07:21:09.851925 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 07:21:12 crc kubenswrapper[4909]: I1126 07:21:12.949651 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 26 07:21:12 crc kubenswrapper[4909]: I1126 07:21:12.951423 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 26 07:21:12 crc kubenswrapper[4909]: I1126 07:21:12.956388 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 26 07:21:13 crc kubenswrapper[4909]: I1126 07:21:13.499446 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.380867 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.478051 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c829k\" (UniqueName: \"kubernetes.io/projected/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-kube-api-access-c829k\") pod \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.478253 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-combined-ca-bundle\") pod \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.478541 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-config-data\") pod \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\" (UID: \"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90\") " Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.488615 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-kube-api-access-c829k" (OuterVolumeSpecName: "kube-api-access-c829k") pod "f8c42f1d-d209-4476-9e6b-5a54fdd4ac90" (UID: "f8c42f1d-d209-4476-9e6b-5a54fdd4ac90"). InnerVolumeSpecName "kube-api-access-c829k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.516129 4909 generic.go:334] "Generic (PLEG): container finished" podID="f8c42f1d-d209-4476-9e6b-5a54fdd4ac90" containerID="76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5" exitCode=137 Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.516197 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90","Type":"ContainerDied","Data":"76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5"} Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.516253 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8c42f1d-d209-4476-9e6b-5a54fdd4ac90","Type":"ContainerDied","Data":"72d875b3e75c2b99c0634a564d755deadd468c80550525fd32b1db8bacbbac35"} Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.516275 4909 scope.go:117] "RemoveContainer" containerID="76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.516212 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.520399 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-config-data" (OuterVolumeSpecName: "config-data") pod "f8c42f1d-d209-4476-9e6b-5a54fdd4ac90" (UID: "f8c42f1d-d209-4476-9e6b-5a54fdd4ac90"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.525976 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8c42f1d-d209-4476-9e6b-5a54fdd4ac90" (UID: "f8c42f1d-d209-4476-9e6b-5a54fdd4ac90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.582114 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c829k\" (UniqueName: \"kubernetes.io/projected/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-kube-api-access-c829k\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.582450 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.582486 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.587296 4909 scope.go:117] "RemoveContainer" containerID="76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5" Nov 26 07:21:15 crc kubenswrapper[4909]: E1126 07:21:15.587732 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5\": container with ID starting with 76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5 not found: ID does not exist" containerID="76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.587768 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5"} err="failed to get container status \"76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5\": rpc error: code = NotFound desc = could not find container \"76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5\": container with ID starting with 76d9dd8633d5eeb5719445944309a144b7740d34ceaab45a21e307079f6442a5 not found: ID does not exist" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.861732 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.870581 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.891680 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:21:15 crc kubenswrapper[4909]: E1126 07:21:15.892377 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c42f1d-d209-4476-9e6b-5a54fdd4ac90" containerName="nova-cell1-novncproxy-novncproxy" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.892406 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c42f1d-d209-4476-9e6b-5a54fdd4ac90" containerName="nova-cell1-novncproxy-novncproxy" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.896773 4909 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f8c42f1d-d209-4476-9e6b-5a54fdd4ac90" containerName="nova-cell1-novncproxy-novncproxy" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.898096 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.900060 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.901444 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.901764 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.912217 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.991242 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.991287 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.991311 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q54r\" (UniqueName: \"kubernetes.io/projected/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-kube-api-access-5q54r\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.991355 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:15 crc kubenswrapper[4909]: I1126 07:21:15.991429 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.092971 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.093019 4909 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.093040 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q54r\" (UniqueName: \"kubernetes.io/projected/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-kube-api-access-5q54r\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.093084 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.093120 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.099551 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.099894 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.101978 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.110069 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.125008 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q54r\" (UniqueName: \"kubernetes.io/projected/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-kube-api-access-5q54r\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.227402 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.509389 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8c42f1d-d209-4476-9e6b-5a54fdd4ac90" path="/var/lib/kubelet/pods/f8c42f1d-d209-4476-9e6b-5a54fdd4ac90/volumes" Nov 26 07:21:16 crc kubenswrapper[4909]: I1126 07:21:16.687822 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:21:16 crc kubenswrapper[4909]: W1126 07:21:16.696806 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee5f38e5_ecb6_4284_85d8_a2c93db1cfa4.slice/crio-3e1fa8da909e5f2a3e31f2d41cf514d44119015cc6d1f43c8ce67e358c6b4419 WatchSource:0}: Error finding container 3e1fa8da909e5f2a3e31f2d41cf514d44119015cc6d1f43c8ce67e358c6b4419: Status 404 returned error can't find the container with id 3e1fa8da909e5f2a3e31f2d41cf514d44119015cc6d1f43c8ce67e358c6b4419 Nov 26 07:21:17 crc kubenswrapper[4909]: I1126 07:21:17.537075 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4","Type":"ContainerStarted","Data":"fc4e1d11210ea94d5e20c39c83a322bf0f7dc51504c8b4db99b77d2610531017"} Nov 26 07:21:17 crc kubenswrapper[4909]: I1126 07:21:17.537419 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4","Type":"ContainerStarted","Data":"3e1fa8da909e5f2a3e31f2d41cf514d44119015cc6d1f43c8ce67e358c6b4419"} Nov 26 07:21:17 crc kubenswrapper[4909]: I1126 07:21:17.562127 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.562107522 podStartE2EDuration="2.562107522s" podCreationTimestamp="2025-11-26 07:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:21:17.561130305 +0000 UTC m=+1249.707341481" watchObservedRunningTime="2025-11-26 07:21:17.562107522 +0000 UTC m=+1249.708318688" Nov 26 07:21:18 crc kubenswrapper[4909]: I1126 07:21:18.771444 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 26 07:21:18 crc kubenswrapper[4909]: I1126 07:21:18.772202 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 26 07:21:18 crc kubenswrapper[4909]: I1126 07:21:18.776451 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 26 07:21:18 crc kubenswrapper[4909]: I1126 07:21:18.780250 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 26 07:21:19 crc kubenswrapper[4909]: I1126 07:21:19.567023 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 26 07:21:19 crc kubenswrapper[4909]: I1126 07:21:19.571833 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 26 07:21:19 crc kubenswrapper[4909]: I1126 07:21:19.809222 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dtqxp"] Nov 26 07:21:19 crc kubenswrapper[4909]: I1126 07:21:19.814428 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:19 crc kubenswrapper[4909]: I1126 07:21:19.885800 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dtqxp"] Nov 26 07:21:19 crc kubenswrapper[4909]: I1126 07:21:19.995155 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-config\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:19 crc kubenswrapper[4909]: I1126 07:21:19.995344 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:19 crc kubenswrapper[4909]: I1126 07:21:19.995501 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:19 crc kubenswrapper[4909]: I1126 07:21:19.995541 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:19 crc kubenswrapper[4909]: I1126 07:21:19.995565 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjv9r\" (UniqueName: \"kubernetes.io/projected/acbd9367-38fb-4a1d-b818-c0bd4893c0de-kube-api-access-bjv9r\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:19 crc kubenswrapper[4909]: I1126 07:21:19.995767 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.098656 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-config\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.099889 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.100783 4909 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-config\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.098761 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.100953 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.100989 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.101021 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjv9r\" (UniqueName: \"kubernetes.io/projected/acbd9367-38fb-4a1d-b818-c0bd4893c0de-kube-api-access-bjv9r\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.101726 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.101860 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.102471 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.102494 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.129091 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjv9r\" (UniqueName: 
\"kubernetes.io/projected/acbd9367-38fb-4a1d-b818-c0bd4893c0de-kube-api-access-bjv9r\") pod \"dnsmasq-dns-59cf4bdb65-dtqxp\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.160617 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:20 crc kubenswrapper[4909]: I1126 07:21:20.635679 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dtqxp"] Nov 26 07:21:20 crc kubenswrapper[4909]: W1126 07:21:20.641834 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacbd9367_38fb_4a1d_b818_c0bd4893c0de.slice/crio-52e6c56d17f7a05e7f9cf285eedb00e1845e4aa1d7dd78c81f4bf2f9dd2f6204 WatchSource:0}: Error finding container 52e6c56d17f7a05e7f9cf285eedb00e1845e4aa1d7dd78c81f4bf2f9dd2f6204: Status 404 returned error can't find the container with id 52e6c56d17f7a05e7f9cf285eedb00e1845e4aa1d7dd78c81f4bf2f9dd2f6204 Nov 26 07:21:21 crc kubenswrapper[4909]: I1126 07:21:21.227533 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:21 crc kubenswrapper[4909]: I1126 07:21:21.585687 4909 generic.go:334] "Generic (PLEG): container finished" podID="acbd9367-38fb-4a1d-b818-c0bd4893c0de" containerID="f378d7762407801075bfcb45c33cc2c4c74d6b521a7a2e5f2082de9945e6ffe6" exitCode=0 Nov 26 07:21:21 crc kubenswrapper[4909]: I1126 07:21:21.585738 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" event={"ID":"acbd9367-38fb-4a1d-b818-c0bd4893c0de","Type":"ContainerDied","Data":"f378d7762407801075bfcb45c33cc2c4c74d6b521a7a2e5f2082de9945e6ffe6"} Nov 26 07:21:21 crc kubenswrapper[4909]: I1126 07:21:21.585778 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" event={"ID":"acbd9367-38fb-4a1d-b818-c0bd4893c0de","Type":"ContainerStarted","Data":"52e6c56d17f7a05e7f9cf285eedb00e1845e4aa1d7dd78c81f4bf2f9dd2f6204"} Nov 26 07:21:21 crc kubenswrapper[4909]: I1126 07:21:21.755278 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:21:21 crc kubenswrapper[4909]: I1126 07:21:21.755650 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="ceilometer-central-agent" containerID="cri-o://007766ce28d2af469b493d247580292d5eedccd341f1251422acd113e44a6db3" gracePeriod=30 Nov 26 07:21:21 crc kubenswrapper[4909]: I1126 07:21:21.755698 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="sg-core" containerID="cri-o://7d64f94a2c4be18cdf26a6cd03c5145beaba741970e383df66509330124c3fa2" gracePeriod=30 Nov 26 07:21:21 crc kubenswrapper[4909]: I1126 07:21:21.755746 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="proxy-httpd" containerID="cri-o://1b2abf35840ff897773209fad488c882281c30bb25231cceb29ba813a82dc5a4" gracePeriod=30 Nov 26 07:21:21 crc kubenswrapper[4909]: I1126 07:21:21.755779 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="ceilometer-notification-agent" containerID="cri-o://430d2c18966a3f361b936b46ac851fceeca7577b96c68b018f615b3aeaa90008" gracePeriod=30 Nov 26 07:21:21 crc kubenswrapper[4909]: I1126 07:21:21.780505 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.197:3000/\": EOF" Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.610561 4909 generic.go:334] "Generic (PLEG): container finished" podID="25d36615-9326-4d16-8f55-a4166ac2f555" containerID="1b2abf35840ff897773209fad488c882281c30bb25231cceb29ba813a82dc5a4" exitCode=0 Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.610923 4909 generic.go:334] "Generic (PLEG): container finished" podID="25d36615-9326-4d16-8f55-a4166ac2f555" containerID="7d64f94a2c4be18cdf26a6cd03c5145beaba741970e383df66509330124c3fa2" exitCode=2 Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.610933 4909 generic.go:334] "Generic (PLEG): container finished" podID="25d36615-9326-4d16-8f55-a4166ac2f555" containerID="007766ce28d2af469b493d247580292d5eedccd341f1251422acd113e44a6db3" exitCode=0 Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.610714 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25d36615-9326-4d16-8f55-a4166ac2f555","Type":"ContainerDied","Data":"1b2abf35840ff897773209fad488c882281c30bb25231cceb29ba813a82dc5a4"} Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.610991 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25d36615-9326-4d16-8f55-a4166ac2f555","Type":"ContainerDied","Data":"7d64f94a2c4be18cdf26a6cd03c5145beaba741970e383df66509330124c3fa2"} Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.611001 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25d36615-9326-4d16-8f55-a4166ac2f555","Type":"ContainerDied","Data":"007766ce28d2af469b493d247580292d5eedccd341f1251422acd113e44a6db3"} Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.620083 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" event={"ID":"acbd9367-38fb-4a1d-b818-c0bd4893c0de","Type":"ContainerStarted","Data":"b3791de9e2272db503f056aa00b550a13134003b62ae8874f22a308095ddd4a6"} Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.623886 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.659085 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" podStartSLOduration=3.659032696 podStartE2EDuration="3.659032696s" podCreationTimestamp="2025-11-26 07:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:21:22.64486799 +0000 UTC m=+1254.791079196" watchObservedRunningTime="2025-11-26 07:21:22.659032696 +0000 UTC m=+1254.805243872" Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.678192 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.678576 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerName="nova-api-api" containerID="cri-o://ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526" gracePeriod=30 Nov 26 07:21:22 crc kubenswrapper[4909]: I1126 07:21:22.679118 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerName="nova-api-log" containerID="cri-o://ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446" gracePeriod=30 Nov 26 07:21:23 crc kubenswrapper[4909]: I1126 07:21:23.630040 4909 generic.go:334] "Generic (PLEG): container finished" podID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerID="ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446" exitCode=143 Nov 26 07:21:23 crc kubenswrapper[4909]: I1126 07:21:23.630080 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e2e665c-08b4-42fc-98a9-94b6b40db551","Type":"ContainerDied","Data":"ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446"} Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.228190 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.263973 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.452377 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.624291 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-combined-ca-bundle\") pod \"6e2e665c-08b4-42fc-98a9-94b6b40db551\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.624387 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e2e665c-08b4-42fc-98a9-94b6b40db551-logs\") pod \"6e2e665c-08b4-42fc-98a9-94b6b40db551\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.624514 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm7cn\" (UniqueName: \"kubernetes.io/projected/6e2e665c-08b4-42fc-98a9-94b6b40db551-kube-api-access-lm7cn\") pod \"6e2e665c-08b4-42fc-98a9-94b6b40db551\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.624657 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-config-data\") pod \"6e2e665c-08b4-42fc-98a9-94b6b40db551\" (UID: \"6e2e665c-08b4-42fc-98a9-94b6b40db551\") " Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.630898 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e2e665c-08b4-42fc-98a9-94b6b40db551-logs" (OuterVolumeSpecName: "logs") pod "6e2e665c-08b4-42fc-98a9-94b6b40db551" (UID: "6e2e665c-08b4-42fc-98a9-94b6b40db551"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.666307 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2e665c-08b4-42fc-98a9-94b6b40db551-kube-api-access-lm7cn" (OuterVolumeSpecName: "kube-api-access-lm7cn") pod "6e2e665c-08b4-42fc-98a9-94b6b40db551" (UID: "6e2e665c-08b4-42fc-98a9-94b6b40db551"). InnerVolumeSpecName "kube-api-access-lm7cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.709144 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e2e665c-08b4-42fc-98a9-94b6b40db551" (UID: "6e2e665c-08b4-42fc-98a9-94b6b40db551"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.716831 4909 generic.go:334] "Generic (PLEG): container finished" podID="25d36615-9326-4d16-8f55-a4166ac2f555" containerID="430d2c18966a3f361b936b46ac851fceeca7577b96c68b018f615b3aeaa90008" exitCode=0 Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.716913 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25d36615-9326-4d16-8f55-a4166ac2f555","Type":"ContainerDied","Data":"430d2c18966a3f361b936b46ac851fceeca7577b96c68b018f615b3aeaa90008"} Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.720997 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-config-data" (OuterVolumeSpecName: "config-data") pod "6e2e665c-08b4-42fc-98a9-94b6b40db551" (UID: "6e2e665c-08b4-42fc-98a9-94b6b40db551"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.725294 4909 generic.go:334] "Generic (PLEG): container finished" podID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerID="ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526" exitCode=0 Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.727159 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.728639 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.729286 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e2e665c-08b4-42fc-98a9-94b6b40db551-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.729298 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm7cn\" (UniqueName: \"kubernetes.io/projected/6e2e665c-08b4-42fc-98a9-94b6b40db551-kube-api-access-lm7cn\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.729355 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e2e665c-08b4-42fc-98a9-94b6b40db551-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.729723 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e2e665c-08b4-42fc-98a9-94b6b40db551","Type":"ContainerDied","Data":"ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526"} Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.729771 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e2e665c-08b4-42fc-98a9-94b6b40db551","Type":"ContainerDied","Data":"3234ecca36c2a0d612f914dd2152a59383eda775ecf0c8a6686b7fcd25abf65b"} Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.729792 4909 scope.go:117] "RemoveContainer" containerID="ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.744956 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.747400 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.760877 4909 scope.go:117] "RemoveContainer" containerID="ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.803282 4909 scope.go:117] "RemoveContainer" containerID="ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.803625 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:26 crc kubenswrapper[4909]: E1126 07:21:26.803969 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526\": container with ID starting with ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526 not found: ID does not exist" containerID="ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.804005 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526"} err="failed to get container status \"ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526\": rpc error: code = NotFound desc = could not find container \"ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526\": container with ID starting with ae98b6924ae1d9e2fcdaa40fee27fc2d25f2a471fb26911b7f97d27b1df4b526 not found: ID does not exist" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.804033 4909 scope.go:117] "RemoveContainer" containerID="ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446" Nov 26 07:21:26 crc kubenswrapper[4909]: E1126 07:21:26.804345 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446\": container with ID starting with ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446 not found: ID does not exist" containerID="ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.804378 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446"} err="failed to get container status \"ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446\": rpc error: code = NotFound desc = could not find container \"ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446\": container with ID starting with ea13b70271686ab8f96e457622ba22e8e0b215cc45b20e97a88d520512587446 not found: ID does not exist" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.830124 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-run-httpd\") pod \"25d36615-9326-4d16-8f55-a4166ac2f555\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.830232 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-config-data\") pod \"25d36615-9326-4d16-8f55-a4166ac2f555\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " Nov 26 
Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.830277 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-log-httpd\") pod \"25d36615-9326-4d16-8f55-a4166ac2f555\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.830344 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hps4q\" (UniqueName: \"kubernetes.io/projected/25d36615-9326-4d16-8f55-a4166ac2f555-kube-api-access-hps4q\") pod \"25d36615-9326-4d16-8f55-a4166ac2f555\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.830429 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-scripts\") pod \"25d36615-9326-4d16-8f55-a4166ac2f555\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.830500 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-sg-core-conf-yaml\") pod \"25d36615-9326-4d16-8f55-a4166ac2f555\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.830646 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-combined-ca-bundle\") pod \"25d36615-9326-4d16-8f55-a4166ac2f555\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.831007 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-ceilometer-tls-certs\") pod \"25d36615-9326-4d16-8f55-a4166ac2f555\" (UID: \"25d36615-9326-4d16-8f55-a4166ac2f555\") " Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.832686 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "25d36615-9326-4d16-8f55-a4166ac2f555" (UID: "25d36615-9326-4d16-8f55-a4166ac2f555"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.832998 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "25d36615-9326-4d16-8f55-a4166ac2f555" (UID: "25d36615-9326-4d16-8f55-a4166ac2f555"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.836998 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d36615-9326-4d16-8f55-a4166ac2f555-kube-api-access-hps4q" (OuterVolumeSpecName: "kube-api-access-hps4q") pod "25d36615-9326-4d16-8f55-a4166ac2f555" (UID: "25d36615-9326-4d16-8f55-a4166ac2f555"). InnerVolumeSpecName "kube-api-access-hps4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.837260 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.851970 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-scripts" (OuterVolumeSpecName: "scripts") pod "25d36615-9326-4d16-8f55-a4166ac2f555" (UID: "25d36615-9326-4d16-8f55-a4166ac2f555"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864097 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:26 crc kubenswrapper[4909]: E1126 07:21:26.864570 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="proxy-httpd" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864599 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="proxy-httpd" Nov 26 07:21:26 crc kubenswrapper[4909]: E1126 07:21:26.864629 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerName="nova-api-log" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864635 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerName="nova-api-log" Nov 26 07:21:26 crc kubenswrapper[4909]: E1126 07:21:26.864645 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="ceilometer-central-agent" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864651 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="ceilometer-central-agent" Nov 26 07:21:26 crc kubenswrapper[4909]: E1126 07:21:26.864671 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerName="nova-api-api" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864677 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerName="nova-api-api" Nov 26 07:21:26 crc kubenswrapper[4909]: E1126 07:21:26.864686 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="sg-core" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864691 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="sg-core" Nov 26 07:21:26 crc kubenswrapper[4909]: E1126 07:21:26.864702 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="ceilometer-notification-agent" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864710 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="ceilometer-notification-agent" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864872 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="proxy-httpd" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864890 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerName="nova-api-api" Nov 
Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864900 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="ceilometer-central-agent" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864907 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="sg-core" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864918 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e2e665c-08b4-42fc-98a9-94b6b40db551" containerName="nova-api-log" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.864927 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" containerName="ceilometer-notification-agent" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.865889 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.868903 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.869209 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.869315 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.876863 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.902539 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "25d36615-9326-4d16-8f55-a4166ac2f555" (UID: "25d36615-9326-4d16-8f55-a4166ac2f555"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.928706 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wfpgz"] Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.930049 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.933322 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.937012 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.937892 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wfpgz"] Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.938390 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.938419 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hps4q\" (UniqueName: \"kubernetes.io/projected/25d36615-9326-4d16-8f55-a4166ac2f555-kube-api-access-hps4q\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.938428 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.938438 4909 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.938447 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25d36615-9326-4d16-8f55-a4166ac2f555-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:26 crc kubenswrapper[4909]: I1126 07:21:26.947813 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "25d36615-9326-4d16-8f55-a4166ac2f555" (UID: "25d36615-9326-4d16-8f55-a4166ac2f555"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.040729 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd9w5\" (UniqueName: \"kubernetes.io/projected/7af36523-50b5-4889-90d0-3cc4cee1d063-kube-api-access-xd9w5\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.040793 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-scripts\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.040847 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.040887 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-config-data\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.040915 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.040965 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-config-data\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.041021 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7af36523-50b5-4889-90d0-3cc4cee1d063-logs\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.041065 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztk9n\" (UniqueName: \"kubernetes.io/projected/85fee0d5-ad66-45f3-8bc3-1820cc137b65-kube-api-access-ztk9n\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.041092 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 
07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.041152 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-public-tls-certs\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.041209 4909 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.045125 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25d36615-9326-4d16-8f55-a4166ac2f555" (UID: "25d36615-9326-4d16-8f55-a4166ac2f555"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.056684 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-config-data" (OuterVolumeSpecName: "config-data") pod "25d36615-9326-4d16-8f55-a4166ac2f555" (UID: "25d36615-9326-4d16-8f55-a4166ac2f555"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.143546 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-config-data\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.144273 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7af36523-50b5-4889-90d0-3cc4cee1d063-logs\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.144320 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztk9n\" (UniqueName: \"kubernetes.io/projected/85fee0d5-ad66-45f3-8bc3-1820cc137b65-kube-api-access-ztk9n\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.144340 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.144760 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-public-tls-certs\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.144808 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd9w5\" (UniqueName: 
\"kubernetes.io/projected/7af36523-50b5-4889-90d0-3cc4cee1d063-kube-api-access-xd9w5\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.144832 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-scripts\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.144878 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.144941 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-config-data\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.144963 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.145018 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.145033 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d36615-9326-4d16-8f55-a4166ac2f555-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.145905 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7af36523-50b5-4889-90d0-3cc4cee1d063-logs\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.147954 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-config-data\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.148267 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.149357 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-config-data\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " 
pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.149476 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-scripts\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.151151 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-public-tls-certs\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.154330 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.154886 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.163497 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd9w5\" (UniqueName: \"kubernetes.io/projected/7af36523-50b5-4889-90d0-3cc4cee1d063-kube-api-access-xd9w5\") pod \"nova-api-0\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.163888 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztk9n\" (UniqueName: \"kubernetes.io/projected/85fee0d5-ad66-45f3-8bc3-1820cc137b65-kube-api-access-ztk9n\") pod \"nova-cell1-cell-mapping-wfpgz\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.193634 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.211734 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.697359 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.736675 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25d36615-9326-4d16-8f55-a4166ac2f555","Type":"ContainerDied","Data":"6b769b41ba5f941dec07d5593036c74d8292bffcfc334f1b4c5b38a5836dd789"} Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.736757 4909 scope.go:117] "RemoveContainer" containerID="1b2abf35840ff897773209fad488c882281c30bb25231cceb29ba813a82dc5a4" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.736779 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.741270 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af36523-50b5-4889-90d0-3cc4cee1d063","Type":"ContainerStarted","Data":"6043283d4fb99ce16dcfba26b026d33a1124af6d8ea6524a41241868e502425a"} Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.760760 4909 scope.go:117] "RemoveContainer" containerID="7d64f94a2c4be18cdf26a6cd03c5145beaba741970e383df66509330124c3fa2" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.791654 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.794824 4909 scope.go:117] "RemoveContainer" containerID="430d2c18966a3f361b936b46ac851fceeca7577b96c68b018f615b3aeaa90008" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.817768 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:21:27 crc kubenswrapper[4909]: W1126 07:21:27.818124 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85fee0d5_ad66_45f3_8bc3_1820cc137b65.slice/crio-3984e6cc9d230c32a75bc19343ebe9674c859f2b2d2be38daff00c4241b543fb WatchSource:0}: Error finding container 3984e6cc9d230c32a75bc19343ebe9674c859f2b2d2be38daff00c4241b543fb: Status 404 returned error can't find the container with id 3984e6cc9d230c32a75bc19343ebe9674c859f2b2d2be38daff00c4241b543fb Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.829077 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wfpgz"] Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.838407 4909 scope.go:117] "RemoveContainer" containerID="007766ce28d2af469b493d247580292d5eedccd341f1251422acd113e44a6db3" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.850680 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.853575 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.859009 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.859164 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.859253 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.859313 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.964798 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-config-data\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.964851 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-run-httpd\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.964871 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.964906 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.964923 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xch\" (UniqueName: \"kubernetes.io/projected/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-kube-api-access-64xch\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.964949 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-log-httpd\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.965005 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:27 crc kubenswrapper[4909]: I1126 07:21:27.965028 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-scripts\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.066544 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-run-httpd\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.066894 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.066942 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64xch\" (UniqueName: \"kubernetes.io/projected/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-kube-api-access-64xch\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.066960 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.066994 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-log-httpd\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.067046 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.067072 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-scripts\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.067150 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-config-data\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.067708 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-log-httpd\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.068210 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-run-httpd\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.081415 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.081962 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-config-data\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.083430 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64xch\" (UniqueName: \"kubernetes.io/projected/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-kube-api-access-64xch\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.088885 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.090881 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.103497 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-scripts\") pod \"ceilometer-0\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.178444 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.178444 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.513347 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d36615-9326-4d16-8f55-a4166ac2f555" path="/var/lib/kubelet/pods/25d36615-9326-4d16-8f55-a4166ac2f555/volumes" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.514699 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e2e665c-08b4-42fc-98a9-94b6b40db551" path="/var/lib/kubelet/pods/6e2e665c-08b4-42fc-98a9-94b6b40db551/volumes" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.669009 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.756297 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af36523-50b5-4889-90d0-3cc4cee1d063","Type":"ContainerStarted","Data":"b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c"} Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.756362 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af36523-50b5-4889-90d0-3cc4cee1d063","Type":"ContainerStarted","Data":"320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda"} Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.759422 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bdc8ae5-e147-48a9-91d5-1f2425e2b379","Type":"ContainerStarted","Data":"cc26ec16cdbbbf1e5d30fb8087a8bc4ef1981749f48beef8a580156f5d5b2bb4"} Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.761385 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wfpgz" event={"ID":"85fee0d5-ad66-45f3-8bc3-1820cc137b65","Type":"ContainerStarted","Data":"116eab90b476caef43b56d5a61af14c4c7625f3f13c935ccae4c8e2d51b7d92e"} Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.761420 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wfpgz" event={"ID":"85fee0d5-ad66-45f3-8bc3-1820cc137b65","Type":"ContainerStarted","Data":"3984e6cc9d230c32a75bc19343ebe9674c859f2b2d2be38daff00c4241b543fb"} Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.785173 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.785147607 podStartE2EDuration="2.785147607s" podCreationTimestamp="2025-11-26 07:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:21:28.774975053 +0000 UTC m=+1260.921186249" watchObservedRunningTime="2025-11-26 07:21:28.785147607 +0000 UTC m=+1260.931358783" Nov 26 07:21:28 crc kubenswrapper[4909]: I1126 07:21:28.806785 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wfpgz" podStartSLOduration=2.806750422 podStartE2EDuration="2.806750422s" podCreationTimestamp="2025-11-26 07:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:21:28.793198043 +0000 UTC m=+1260.939409219" watchObservedRunningTime="2025-11-26 07:21:28.806750422 +0000 UTC m=+1260.952961598" Nov 26 07:21:29 crc kubenswrapper[4909]: I1126 07:21:29.792810 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3bdc8ae5-e147-48a9-91d5-1f2425e2b379","Type":"ContainerStarted","Data":"2a498b57189a43f970e29d0e8040bbe8756423b8463ab0d366df0dad3c6b6fa0"} Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.161861 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.292587 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4nnzg"] Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.292892 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" podUID="a92c7a14-df9b-4a85-a8dd-a8039a2cb928" containerName="dnsmasq-dns" containerID="cri-o://5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b" gracePeriod=10 Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.770094 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.805952 4909 generic.go:334] "Generic (PLEG): container finished" podID="a92c7a14-df9b-4a85-a8dd-a8039a2cb928" containerID="5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b" exitCode=0 Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.806143 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.806162 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" event={"ID":"a92c7a14-df9b-4a85-a8dd-a8039a2cb928","Type":"ContainerDied","Data":"5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b"} Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.806236 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-4nnzg" event={"ID":"a92c7a14-df9b-4a85-a8dd-a8039a2cb928","Type":"ContainerDied","Data":"bc3251fd16351eb9a98997b1c36af5e33751700f34be8317be284de25cd3d029"} Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.806260 4909 scope.go:117] "RemoveContainer" containerID="5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b" Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.816835 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bdc8ae5-e147-48a9-91d5-1f2425e2b379","Type":"ContainerStarted","Data":"51f26cf80d8da853fb9da8dc0fafd164d9ce6124c41fe4293c3704ab70a1c633"} Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.829121 4909 scope.go:117] "RemoveContainer" containerID="ce759a151efc698243edf6a265214690898c22a0fd1adafe7bd0f44a65e7fb21" Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.890530 4909 scope.go:117] "RemoveContainer" containerID="5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b" Nov 26 07:21:30 crc kubenswrapper[4909]: E1126 07:21:30.893142 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b\": container with ID starting with 5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b not found: ID does not exist" containerID="5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b" Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.893214 4909 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b"} err="failed to get container status \"5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b\": rpc error: code = NotFound desc = could not find container \"5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b\": container with ID starting with 5686d48aa452a4864ae6e68a71febc669a3442c1a2913b9784dcbd8bd618157b not found: ID does not exist" Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.893262 4909 scope.go:117] "RemoveContainer" containerID="ce759a151efc698243edf6a265214690898c22a0fd1adafe7bd0f44a65e7fb21" Nov 26 07:21:30 crc kubenswrapper[4909]: E1126 07:21:30.896979 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce759a151efc698243edf6a265214690898c22a0fd1adafe7bd0f44a65e7fb21\": container with ID starting with ce759a151efc698243edf6a265214690898c22a0fd1adafe7bd0f44a65e7fb21 not found: ID does not exist" containerID="ce759a151efc698243edf6a265214690898c22a0fd1adafe7bd0f44a65e7fb21" Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.897019 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce759a151efc698243edf6a265214690898c22a0fd1adafe7bd0f44a65e7fb21"} err="failed to get container status \"ce759a151efc698243edf6a265214690898c22a0fd1adafe7bd0f44a65e7fb21\": rpc error: code = NotFound desc = could not find container \"ce759a151efc698243edf6a265214690898c22a0fd1adafe7bd0f44a65e7fb21\": container with ID starting with ce759a151efc698243edf6a265214690898c22a0fd1adafe7bd0f44a65e7fb21 not found: ID does not exist" Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.938646 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-nb\") pod \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.938773 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-svc\") pod \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.938823 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-config\") pod \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.938856 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-sb\") pod \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.938941 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7k4z\" (UniqueName: \"kubernetes.io/projected/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-kube-api-access-m7k4z\") pod \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.939113 4909 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-swift-storage-0\") pod \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\" (UID: \"a92c7a14-df9b-4a85-a8dd-a8039a2cb928\") " Nov 26 07:21:30 crc kubenswrapper[4909]: I1126 07:21:30.943038 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-kube-api-access-m7k4z" (OuterVolumeSpecName: "kube-api-access-m7k4z") pod "a92c7a14-df9b-4a85-a8dd-a8039a2cb928" (UID: "a92c7a14-df9b-4a85-a8dd-a8039a2cb928"). InnerVolumeSpecName "kube-api-access-m7k4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.042108 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7k4z\" (UniqueName: \"kubernetes.io/projected/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-kube-api-access-m7k4z\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.067512 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a92c7a14-df9b-4a85-a8dd-a8039a2cb928" (UID: "a92c7a14-df9b-4a85-a8dd-a8039a2cb928"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.091119 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a92c7a14-df9b-4a85-a8dd-a8039a2cb928" (UID: "a92c7a14-df9b-4a85-a8dd-a8039a2cb928"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.091863 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a92c7a14-df9b-4a85-a8dd-a8039a2cb928" (UID: "a92c7a14-df9b-4a85-a8dd-a8039a2cb928"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.119164 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-config" (OuterVolumeSpecName: "config") pod "a92c7a14-df9b-4a85-a8dd-a8039a2cb928" (UID: "a92c7a14-df9b-4a85-a8dd-a8039a2cb928"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.119479 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a92c7a14-df9b-4a85-a8dd-a8039a2cb928" (UID: "a92c7a14-df9b-4a85-a8dd-a8039a2cb928"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.146734 4909 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.146765 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.146776 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.146784 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.146792 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a92c7a14-df9b-4a85-a8dd-a8039a2cb928-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.448406 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4nnzg"] Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.462036 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4nnzg"] Nov 26 07:21:31 crc kubenswrapper[4909]: I1126 07:21:31.831361 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bdc8ae5-e147-48a9-91d5-1f2425e2b379","Type":"ContainerStarted","Data":"043f2d799dc68a009facf1b7538ebe9df2d186af807ec3bf0beb95026499894c"} Nov 26 07:21:32 crc kubenswrapper[4909]: I1126 07:21:32.513920 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92c7a14-df9b-4a85-a8dd-a8039a2cb928" path="/var/lib/kubelet/pods/a92c7a14-df9b-4a85-a8dd-a8039a2cb928/volumes" Nov 26 07:21:32 crc kubenswrapper[4909]: I1126 07:21:32.845992 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bdc8ae5-e147-48a9-91d5-1f2425e2b379","Type":"ContainerStarted","Data":"d849337821d8217076eb9a9d55645f97c144b965bd6ef5def3a986ec27b0c502"} Nov 26 07:21:32 crc kubenswrapper[4909]: I1126 07:21:32.846480 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 07:21:32 crc kubenswrapper[4909]: I1126 07:21:32.869256 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9679727649999998 podStartE2EDuration="5.869219712s" podCreationTimestamp="2025-11-26 07:21:27 +0000 UTC" firstStartedPulling="2025-11-26 07:21:28.680722675 +0000 UTC m=+1260.826933841" lastFinishedPulling="2025-11-26 07:21:32.581969602 +0000 UTC m=+1264.728180788" observedRunningTime="2025-11-26 07:21:32.865783936 +0000 UTC m=+1265.011995122" watchObservedRunningTime="2025-11-26 07:21:32.869219712 +0000 UTC m=+1265.015430898" Nov 26 07:21:33 crc kubenswrapper[4909]: I1126 07:21:33.860115 4909 generic.go:334] "Generic (PLEG): container finished" podID="85fee0d5-ad66-45f3-8bc3-1820cc137b65" 
containerID="116eab90b476caef43b56d5a61af14c4c7625f3f13c935ccae4c8e2d51b7d92e" exitCode=0 Nov 26 07:21:33 crc kubenswrapper[4909]: I1126 07:21:33.860220 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wfpgz" event={"ID":"85fee0d5-ad66-45f3-8bc3-1820cc137b65","Type":"ContainerDied","Data":"116eab90b476caef43b56d5a61af14c4c7625f3f13c935ccae4c8e2d51b7d92e"} Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.283468 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.427977 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztk9n\" (UniqueName: \"kubernetes.io/projected/85fee0d5-ad66-45f3-8bc3-1820cc137b65-kube-api-access-ztk9n\") pod \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.428183 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-config-data\") pod \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.428234 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-combined-ca-bundle\") pod \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.428271 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-scripts\") pod \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\" (UID: \"85fee0d5-ad66-45f3-8bc3-1820cc137b65\") " Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.434822 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85fee0d5-ad66-45f3-8bc3-1820cc137b65-kube-api-access-ztk9n" (OuterVolumeSpecName: "kube-api-access-ztk9n") pod "85fee0d5-ad66-45f3-8bc3-1820cc137b65" (UID: "85fee0d5-ad66-45f3-8bc3-1820cc137b65"). InnerVolumeSpecName "kube-api-access-ztk9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.439817 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-scripts" (OuterVolumeSpecName: "scripts") pod "85fee0d5-ad66-45f3-8bc3-1820cc137b65" (UID: "85fee0d5-ad66-45f3-8bc3-1820cc137b65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.478002 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-config-data" (OuterVolumeSpecName: "config-data") pod "85fee0d5-ad66-45f3-8bc3-1820cc137b65" (UID: "85fee0d5-ad66-45f3-8bc3-1820cc137b65"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.498578 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85fee0d5-ad66-45f3-8bc3-1820cc137b65" (UID: "85fee0d5-ad66-45f3-8bc3-1820cc137b65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.531367 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.531414 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.531427 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztk9n\" (UniqueName: \"kubernetes.io/projected/85fee0d5-ad66-45f3-8bc3-1820cc137b65-kube-api-access-ztk9n\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.531440 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85fee0d5-ad66-45f3-8bc3-1820cc137b65-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.887125 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wfpgz" Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.892152 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wfpgz" event={"ID":"85fee0d5-ad66-45f3-8bc3-1820cc137b65","Type":"ContainerDied","Data":"3984e6cc9d230c32a75bc19343ebe9674c859f2b2d2be38daff00c4241b543fb"} Nov 26 07:21:35 crc kubenswrapper[4909]: I1126 07:21:35.892228 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3984e6cc9d230c32a75bc19343ebe9674c859f2b2d2be38daff00c4241b543fb" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.176500 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.177530 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7af36523-50b5-4889-90d0-3cc4cee1d063" containerName="nova-api-log" containerID="cri-o://320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda" gracePeriod=30 Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.177641 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7af36523-50b5-4889-90d0-3cc4cee1d063" containerName="nova-api-api" containerID="cri-o://b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c" gracePeriod=30 Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.199701 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.200268 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6c988744-5b7a-43b4-8d0b-5c34ee90d2d3" containerName="nova-scheduler-scheduler" 
containerID="cri-o://12ac34164d81708ee6454066bd4299d249cc968383b7ec0cb8044233d2effe68" gracePeriod=30 Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.220120 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.220502 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-log" containerID="cri-o://4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2" gracePeriod=30 Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.221025 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-metadata" containerID="cri-o://cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6" gracePeriod=30 Nov 26 07:21:36 crc kubenswrapper[4909]: E1126 07:21:36.683104 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="12ac34164d81708ee6454066bd4299d249cc968383b7ec0cb8044233d2effe68" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 07:21:36 crc kubenswrapper[4909]: E1126 07:21:36.684617 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="12ac34164d81708ee6454066bd4299d249cc968383b7ec0cb8044233d2effe68" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 07:21:36 crc kubenswrapper[4909]: E1126 07:21:36.686679 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="12ac34164d81708ee6454066bd4299d249cc968383b7ec0cb8044233d2effe68" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 07:21:36 crc kubenswrapper[4909]: E1126 07:21:36.686749 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6c988744-5b7a-43b4-8d0b-5c34ee90d2d3" containerName="nova-scheduler-scheduler" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.765064 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.859961 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-public-tls-certs\") pod \"7af36523-50b5-4889-90d0-3cc4cee1d063\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.860372 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7af36523-50b5-4889-90d0-3cc4cee1d063-logs\") pod \"7af36523-50b5-4889-90d0-3cc4cee1d063\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.860437 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd9w5\" (UniqueName: \"kubernetes.io/projected/7af36523-50b5-4889-90d0-3cc4cee1d063-kube-api-access-xd9w5\") pod \"7af36523-50b5-4889-90d0-3cc4cee1d063\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.860547 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-config-data\") pod \"7af36523-50b5-4889-90d0-3cc4cee1d063\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.860581 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-internal-tls-certs\") pod \"7af36523-50b5-4889-90d0-3cc4cee1d063\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.860693 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-combined-ca-bundle\") pod \"7af36523-50b5-4889-90d0-3cc4cee1d063\" (UID: \"7af36523-50b5-4889-90d0-3cc4cee1d063\") " Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.860768 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7af36523-50b5-4889-90d0-3cc4cee1d063-logs" (OuterVolumeSpecName: "logs") pod "7af36523-50b5-4889-90d0-3cc4cee1d063" (UID: "7af36523-50b5-4889-90d0-3cc4cee1d063"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.861327 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7af36523-50b5-4889-90d0-3cc4cee1d063-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.865580 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af36523-50b5-4889-90d0-3cc4cee1d063-kube-api-access-xd9w5" (OuterVolumeSpecName: "kube-api-access-xd9w5") pod "7af36523-50b5-4889-90d0-3cc4cee1d063" (UID: "7af36523-50b5-4889-90d0-3cc4cee1d063"). InnerVolumeSpecName "kube-api-access-xd9w5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.890737 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-config-data" (OuterVolumeSpecName: "config-data") pod "7af36523-50b5-4889-90d0-3cc4cee1d063" (UID: "7af36523-50b5-4889-90d0-3cc4cee1d063"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.891144 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7af36523-50b5-4889-90d0-3cc4cee1d063" (UID: "7af36523-50b5-4889-90d0-3cc4cee1d063"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.910321 4909 generic.go:334] "Generic (PLEG): container finished" podID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerID="4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2" exitCode=143 Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.910396 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88","Type":"ContainerDied","Data":"4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2"} Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.918322 4909 generic.go:334] "Generic (PLEG): container finished" podID="7af36523-50b5-4889-90d0-3cc4cee1d063" containerID="b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c" exitCode=0 Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.918370 4909 generic.go:334] "Generic (PLEG): container finished" podID="7af36523-50b5-4889-90d0-3cc4cee1d063" containerID="320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda" exitCode=143 Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.918398 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af36523-50b5-4889-90d0-3cc4cee1d063","Type":"ContainerDied","Data":"b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c"} Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.918432 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af36523-50b5-4889-90d0-3cc4cee1d063","Type":"ContainerDied","Data":"320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda"} Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.918445 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af36523-50b5-4889-90d0-3cc4cee1d063","Type":"ContainerDied","Data":"6043283d4fb99ce16dcfba26b026d33a1124af6d8ea6524a41241868e502425a"} Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.918465 4909 scope.go:117] "RemoveContainer" containerID="b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.918705 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.918705 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.930951 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7af36523-50b5-4889-90d0-3cc4cee1d063" (UID: "7af36523-50b5-4889-90d0-3cc4cee1d063"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.948171 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7af36523-50b5-4889-90d0-3cc4cee1d063" (UID: "7af36523-50b5-4889-90d0-3cc4cee1d063"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.948373 4909 scope.go:117] "RemoveContainer" containerID="320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.963222 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.963518 4909 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.963734 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd9w5\" (UniqueName: \"kubernetes.io/projected/7af36523-50b5-4889-90d0-3cc4cee1d063-kube-api-access-xd9w5\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.963866 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.963963 4909 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7af36523-50b5-4889-90d0-3cc4cee1d063-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.972893 4909 scope.go:117] "RemoveContainer" containerID="b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c" Nov 26 07:21:36 crc kubenswrapper[4909]: E1126 07:21:36.973429 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c\": container with ID starting with b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c not found: ID does not exist" containerID="b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.973565 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c"} err="failed to get container status \"b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c\": rpc error: code = NotFound desc = could not find container 
\"b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c\": container with ID starting with b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c not found: ID does not exist" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.973731 4909 scope.go:117] "RemoveContainer" containerID="320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda" Nov 26 07:21:36 crc kubenswrapper[4909]: E1126 07:21:36.974077 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda\": container with ID starting with 320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda not found: ID does not exist" containerID="320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.974162 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda"} err="failed to get container status \"320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda\": rpc error: code = NotFound desc = could not find container \"320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda\": container with ID starting with 320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda not found: ID does not exist" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.974237 4909 scope.go:117] "RemoveContainer" containerID="b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.976017 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c"} err="failed to get container status \"b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c\": rpc error: code = NotFound desc = could not find container \"b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c\": container with ID starting with b4e4c0cbe5d1476e400b1fe757eb5dccbdd1e32c0a375108722d177e11e7960c not found: ID does not exist" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.976110 4909 scope.go:117] "RemoveContainer" containerID="320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda" Nov 26 07:21:36 crc kubenswrapper[4909]: I1126 07:21:36.976385 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda"} err="failed to get container status \"320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda\": rpc error: code = NotFound desc = could not find container \"320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda\": container with ID starting with 320318698fca32d705f7fbd48c0bb3ff9d07e7a950245c7f1a75bfd286565bda not found: ID does not exist" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.261955 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.281951 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.293687 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:37 crc kubenswrapper[4909]: E1126 07:21:37.294158 4909 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a92c7a14-df9b-4a85-a8dd-a8039a2cb928" containerName="init" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.294172 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92c7a14-df9b-4a85-a8dd-a8039a2cb928" containerName="init" Nov 26 07:21:37 crc kubenswrapper[4909]: E1126 07:21:37.294203 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92c7a14-df9b-4a85-a8dd-a8039a2cb928" containerName="dnsmasq-dns" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.294210 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92c7a14-df9b-4a85-a8dd-a8039a2cb928" containerName="dnsmasq-dns" Nov 26 07:21:37 crc kubenswrapper[4909]: E1126 07:21:37.294229 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af36523-50b5-4889-90d0-3cc4cee1d063" containerName="nova-api-log" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.294236 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af36523-50b5-4889-90d0-3cc4cee1d063" containerName="nova-api-log" Nov 26 07:21:37 crc kubenswrapper[4909]: E1126 07:21:37.294245 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af36523-50b5-4889-90d0-3cc4cee1d063" containerName="nova-api-api" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.294253 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af36523-50b5-4889-90d0-3cc4cee1d063" containerName="nova-api-api" Nov 26 07:21:37 crc kubenswrapper[4909]: E1126 07:21:37.294264 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85fee0d5-ad66-45f3-8bc3-1820cc137b65" containerName="nova-manage" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.294271 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="85fee0d5-ad66-45f3-8bc3-1820cc137b65" containerName="nova-manage" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.294445 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af36523-50b5-4889-90d0-3cc4cee1d063" containerName="nova-api-log" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.294463 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="85fee0d5-ad66-45f3-8bc3-1820cc137b65" containerName="nova-manage" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.294488 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af36523-50b5-4889-90d0-3cc4cee1d063" containerName="nova-api-api" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.294500 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92c7a14-df9b-4a85-a8dd-a8039a2cb928" containerName="dnsmasq-dns" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.295514 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.295514 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.300292 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.300745 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.308958 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.319413 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.371078 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-public-tls-certs\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.371129 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.371159 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.371178 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2d6c\" (UniqueName: \"kubernetes.io/projected/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-kube-api-access-q2d6c\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.371279 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-logs\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.371308 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-config-data\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.473105 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-logs\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.473199 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-config-data\") pod \"nova-api-0\" (UID: 
\"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.473371 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-public-tls-certs\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.473416 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.473461 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.473492 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2d6c\" (UniqueName: \"kubernetes.io/projected/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-kube-api-access-q2d6c\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.473668 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-logs\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.477050 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.477275 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-config-data\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.477766 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.483526 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-public-tls-certs\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.490056 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2d6c\" (UniqueName: \"kubernetes.io/projected/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-kube-api-access-q2d6c\") pod \"nova-api-0\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " 
pod="openstack/nova-api-0" Nov 26 07:21:37 crc kubenswrapper[4909]: I1126 07:21:37.657445 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:21:38 crc kubenswrapper[4909]: I1126 07:21:38.123127 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:21:38 crc kubenswrapper[4909]: W1126 07:21:38.131777 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76566d98_8a97_4bd6_9a1c_ae8c0eee9d88.slice/crio-2ef100cb9143ba5408365743d3f8bc0712d229d77f99f46a201ecaef463a0656 WatchSource:0}: Error finding container 2ef100cb9143ba5408365743d3f8bc0712d229d77f99f46a201ecaef463a0656: Status 404 returned error can't find the container with id 2ef100cb9143ba5408365743d3f8bc0712d229d77f99f46a201ecaef463a0656 Nov 26 07:21:38 crc kubenswrapper[4909]: I1126 07:21:38.520242 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af36523-50b5-4889-90d0-3cc4cee1d063" path="/var/lib/kubelet/pods/7af36523-50b5-4889-90d0-3cc4cee1d063/volumes" Nov 26 07:21:38 crc kubenswrapper[4909]: I1126 07:21:38.939294 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88","Type":"ContainerStarted","Data":"bd1ad1420e63a2b12ad289deaed447ee9e8f36e1917943a95d281b6236ae4f9e"} Nov 26 07:21:38 crc kubenswrapper[4909]: I1126 07:21:38.939622 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88","Type":"ContainerStarted","Data":"af985331eb5f612c15f9ade45f71e902d86e7e0cdc019c0a34c486877d6504c7"} Nov 26 07:21:38 crc kubenswrapper[4909]: I1126 07:21:38.939639 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88","Type":"ContainerStarted","Data":"2ef100cb9143ba5408365743d3f8bc0712d229d77f99f46a201ecaef463a0656"} Nov 26 07:21:38 crc kubenswrapper[4909]: I1126 07:21:38.962395 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.962370311 podStartE2EDuration="1.962370311s" podCreationTimestamp="2025-11-26 07:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:21:38.958576115 +0000 UTC m=+1271.104787291" watchObservedRunningTime="2025-11-26 07:21:38.962370311 +0000 UTC m=+1271.108581477" Nov 26 07:21:39 crc kubenswrapper[4909]: I1126 07:21:39.448224 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:53846->10.217.0.193:8775: read: connection reset by peer" Nov 26 07:21:39 crc kubenswrapper[4909]: I1126 07:21:39.448237 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:53854->10.217.0.193:8775: read: connection reset by peer" Nov 26 07:21:39 crc kubenswrapper[4909]: I1126 07:21:39.952855 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 07:21:39 crc kubenswrapper[4909]: I1126 07:21:39.952855 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:21:39 crc kubenswrapper[4909]: I1126 07:21:39.954195 4909 generic.go:334] "Generic (PLEG): container finished" podID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerID="cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6" exitCode=0 Nov 26 07:21:39 crc kubenswrapper[4909]: I1126 07:21:39.955202 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88","Type":"ContainerDied","Data":"cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6"} Nov 26 07:21:39 crc kubenswrapper[4909]: I1126 07:21:39.955253 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88","Type":"ContainerDied","Data":"7bbce2fd741961eef1b1017c8c69bcb1fcf4837b4e4641986186efd8f4dd117a"} Nov 26 07:21:39 crc kubenswrapper[4909]: I1126 07:21:39.955273 4909 scope.go:117] "RemoveContainer" containerID="cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.000131 4909 scope.go:117] "RemoveContainer" containerID="4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.022547 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-nova-metadata-tls-certs\") pod \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.022629 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-config-data\") pod \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.022717 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-logs\") pod \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.022860 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-combined-ca-bundle\") pod \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.022923 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq9nj\" (UniqueName: \"kubernetes.io/projected/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-kube-api-access-cq9nj\") pod \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\" (UID: \"3c3d8d4c-ae9e-4935-a7fa-3da16867ef88\") " Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.023313 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-logs" (OuterVolumeSpecName: "logs") pod "3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" (UID: "3c3d8d4c-ae9e-4935-a7fa-3da16867ef88"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.023743 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.025744 4909 scope.go:117] "RemoveContainer" containerID="cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6" Nov 26 07:21:40 crc kubenswrapper[4909]: E1126 07:21:40.026498 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6\": container with ID starting with cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6 not found: ID does not exist" containerID="cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.026543 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6"} err="failed to get container status \"cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6\": rpc error: code = NotFound desc = could not find container \"cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6\": container with ID starting with cdec2c817ce4e6f1231184c5bb722c21862ef007923ae7a04bdb165c015ac9e6 not found: ID does not exist" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.026569 4909 scope.go:117] "RemoveContainer" containerID="4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2" Nov 26 07:21:40 crc kubenswrapper[4909]: E1126 07:21:40.028027 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2\": container with ID starting with 4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2 not found: ID does not exist" containerID="4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.028054 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2"} err="failed to get container status \"4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2\": rpc error: code = NotFound desc = could not find container \"4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2\": container with ID starting with 4eb4d52e1fb597c322f0fe00e9d82b4e5e34b72c80d9c3373674f340937be5f2 not found: ID does not exist" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.031038 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-kube-api-access-cq9nj" (OuterVolumeSpecName: "kube-api-access-cq9nj") pod "3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" (UID: "3c3d8d4c-ae9e-4935-a7fa-3da16867ef88"). InnerVolumeSpecName "kube-api-access-cq9nj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.061452 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" (UID: "3c3d8d4c-ae9e-4935-a7fa-3da16867ef88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.065082 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-config-data" (OuterVolumeSpecName: "config-data") pod "3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" (UID: "3c3d8d4c-ae9e-4935-a7fa-3da16867ef88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.085150 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" (UID: "3c3d8d4c-ae9e-4935-a7fa-3da16867ef88"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.126083 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.126118 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq9nj\" (UniqueName: \"kubernetes.io/projected/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-kube-api-access-cq9nj\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.126132 4909 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.126144 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.964108 4909 generic.go:334] "Generic (PLEG): container finished" podID="6c988744-5b7a-43b4-8d0b-5c34ee90d2d3" containerID="12ac34164d81708ee6454066bd4299d249cc968383b7ec0cb8044233d2effe68" exitCode=0 Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.964187 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3","Type":"ContainerDied","Data":"12ac34164d81708ee6454066bd4299d249cc968383b7ec0cb8044233d2effe68"} Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.964463 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3","Type":"ContainerDied","Data":"a8422c5e6cbf19438ea57ac7bd6bb317084212ce1e3371d3159d41df99563262"} Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.964478 4909 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a8422c5e6cbf19438ea57ac7bd6bb317084212ce1e3371d3159d41df99563262" Nov 26 07:21:40 crc kubenswrapper[4909]: I1126 07:21:40.966505 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.045251 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.060527 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.080058 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.117954 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:21:41 crc kubenswrapper[4909]: E1126 07:21:41.118560 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-metadata" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.118589 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-metadata" Nov 26 07:21:41 crc kubenswrapper[4909]: E1126 07:21:41.118635 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c988744-5b7a-43b4-8d0b-5c34ee90d2d3" containerName="nova-scheduler-scheduler" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.118646 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c988744-5b7a-43b4-8d0b-5c34ee90d2d3" containerName="nova-scheduler-scheduler" Nov 26 07:21:41 crc kubenswrapper[4909]: E1126 07:21:41.118675 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-log" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.118700 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-log" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.119020 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c988744-5b7a-43b4-8d0b-5c34ee90d2d3" containerName="nova-scheduler-scheduler" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.119066 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-log" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.119090 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" containerName="nova-metadata-metadata" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.120751 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.129141 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.129408 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.144863 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.148452 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdrwx\" (UniqueName: \"kubernetes.io/projected/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-kube-api-access-bdrwx\") pod \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\" (UID: \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.150208 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-config-data\") pod \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\" (UID: \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.150358 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-combined-ca-bundle\") pod \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\" (UID: \"6c988744-5b7a-43b4-8d0b-5c34ee90d2d3\") " Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.151150 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.151184 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msjw9\" (UniqueName: \"kubernetes.io/projected/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-kube-api-access-msjw9\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.151224 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-logs\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.151277 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.151301 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-config-data\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 
07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.165997 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-kube-api-access-bdrwx" (OuterVolumeSpecName: "kube-api-access-bdrwx") pod "6c988744-5b7a-43b4-8d0b-5c34ee90d2d3" (UID: "6c988744-5b7a-43b4-8d0b-5c34ee90d2d3"). InnerVolumeSpecName "kube-api-access-bdrwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.176557 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-config-data" (OuterVolumeSpecName: "config-data") pod "6c988744-5b7a-43b4-8d0b-5c34ee90d2d3" (UID: "6c988744-5b7a-43b4-8d0b-5c34ee90d2d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.193266 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c988744-5b7a-43b4-8d0b-5c34ee90d2d3" (UID: "6c988744-5b7a-43b4-8d0b-5c34ee90d2d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.252907 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-logs\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.252990 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.253026 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-config-data\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.253158 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.253180 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msjw9\" (UniqueName: \"kubernetes.io/projected/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-kube-api-access-msjw9\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.253240 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.253266 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.253281 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdrwx\" (UniqueName: \"kubernetes.io/projected/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3-kube-api-access-bdrwx\") on node \"crc\" DevicePath \"\"" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.254029 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-logs\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.257427 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.257573 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-config-data\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.257897 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.269363 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msjw9\" (UniqueName: \"kubernetes.io/projected/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-kube-api-access-msjw9\") pod \"nova-metadata-0\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.444277 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.907692 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.994567 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62","Type":"ContainerStarted","Data":"1a7910f31f802a31ca5ab351df54a0452561f7fd74cb0bf0679319c74c688dff"} Nov 26 07:21:41 crc kubenswrapper[4909]: I1126 07:21:41.994665 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.048009 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.060085 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.082092 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.083910 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.089182 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.102788 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.206015 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br5ll\" (UniqueName: \"kubernetes.io/projected/db82c9cc-8a13-4751-b93c-d5f9452dea67-kube-api-access-br5ll\") pod \"nova-scheduler-0\" (UID: \"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.206647 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.206683 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-config-data\") pod \"nova-scheduler-0\" (UID: \"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.307893 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br5ll\" (UniqueName: \"kubernetes.io/projected/db82c9cc-8a13-4751-b93c-d5f9452dea67-kube-api-access-br5ll\") pod \"nova-scheduler-0\" (UID: \"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.308021 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.308056 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-config-data\") pod \"nova-scheduler-0\" (UID: \"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.311893 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-config-data\") pod \"nova-scheduler-0\" (UID: 
\"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.312112 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.321708 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br5ll\" (UniqueName: \"kubernetes.io/projected/db82c9cc-8a13-4751-b93c-d5f9452dea67-kube-api-access-br5ll\") pod \"nova-scheduler-0\" (UID: \"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.405735 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.525303 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3d8d4c-ae9e-4935-a7fa-3da16867ef88" path="/var/lib/kubelet/pods/3c3d8d4c-ae9e-4935-a7fa-3da16867ef88/volumes" Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.526182 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c988744-5b7a-43b4-8d0b-5c34ee90d2d3" path="/var/lib/kubelet/pods/6c988744-5b7a-43b4-8d0b-5c34ee90d2d3/volumes" Nov 26 07:21:42 crc kubenswrapper[4909]: W1126 07:21:42.871461 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb82c9cc_8a13_4751_b93c_d5f9452dea67.slice/crio-234b5a0c0f0caeb956d7e7919ed5a67c88645d77db6b15af01ce4bf55ed861e9 WatchSource:0}: Error finding container 234b5a0c0f0caeb956d7e7919ed5a67c88645d77db6b15af01ce4bf55ed861e9: Status 404 returned error can't find the container with id 234b5a0c0f0caeb956d7e7919ed5a67c88645d77db6b15af01ce4bf55ed861e9 Nov 26 07:21:42 crc kubenswrapper[4909]: I1126 07:21:42.875306 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:21:43 crc kubenswrapper[4909]: I1126 07:21:43.005272 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62","Type":"ContainerStarted","Data":"930ad0d120558b695b7edf3c8655a0626d86ed2453cbe13f094dc27931f585b5"} Nov 26 07:21:43 crc kubenswrapper[4909]: I1126 07:21:43.005358 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62","Type":"ContainerStarted","Data":"7efb83d37c0b27280af33c61d86ae63402ad34c0d0269b1b068ec8e29e729792"} Nov 26 07:21:43 crc kubenswrapper[4909]: I1126 07:21:43.008952 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db82c9cc-8a13-4751-b93c-d5f9452dea67","Type":"ContainerStarted","Data":"234b5a0c0f0caeb956d7e7919ed5a67c88645d77db6b15af01ce4bf55ed861e9"} Nov 26 07:21:43 crc kubenswrapper[4909]: I1126 07:21:43.033426 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.033394381 podStartE2EDuration="2.033394381s" podCreationTimestamp="2025-11-26 07:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:21:43.025216942 +0000 UTC m=+1275.171428108" 
watchObservedRunningTime="2025-11-26 07:21:43.033394381 +0000 UTC m=+1275.179605547" Nov 26 07:21:44 crc kubenswrapper[4909]: I1126 07:21:44.020678 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db82c9cc-8a13-4751-b93c-d5f9452dea67","Type":"ContainerStarted","Data":"0b5daeedd458f13616c9700a107ce6438a90f188ad69b81821639742af27e6e9"} Nov 26 07:21:44 crc kubenswrapper[4909]: I1126 07:21:44.041866 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.041840898 podStartE2EDuration="2.041840898s" podCreationTimestamp="2025-11-26 07:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:21:44.037803334 +0000 UTC m=+1276.184014500" watchObservedRunningTime="2025-11-26 07:21:44.041840898 +0000 UTC m=+1276.188052064" Nov 26 07:21:46 crc kubenswrapper[4909]: I1126 07:21:46.444822 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 07:21:46 crc kubenswrapper[4909]: I1126 07:21:46.445240 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 07:21:47 crc kubenswrapper[4909]: I1126 07:21:47.406323 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 26 07:21:47 crc kubenswrapper[4909]: I1126 07:21:47.659132 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 07:21:47 crc kubenswrapper[4909]: I1126 07:21:47.659200 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 07:21:48 crc kubenswrapper[4909]: I1126 07:21:48.669797 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 26 07:21:48 crc kubenswrapper[4909]: I1126 07:21:48.669832 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 26 07:21:51 crc kubenswrapper[4909]: I1126 07:21:51.444962 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 07:21:51 crc kubenswrapper[4909]: I1126 07:21:51.445457 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 07:21:52 crc kubenswrapper[4909]: I1126 07:21:52.406941 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 26 07:21:52 crc kubenswrapper[4909]: I1126 07:21:52.446424 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 26 07:21:52 crc kubenswrapper[4909]: I1126 07:21:52.461790 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Nov 26 07:21:52 crc kubenswrapper[4909]: I1126 07:21:52.461800 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 26 07:21:53 crc kubenswrapper[4909]: I1126 07:21:53.177849 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 26 07:21:57 crc kubenswrapper[4909]: I1126 07:21:57.672893 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 26 07:21:57 crc kubenswrapper[4909]: I1126 07:21:57.674684 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 26 07:21:57 crc kubenswrapper[4909]: I1126 07:21:57.674755 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 26 07:21:57 crc kubenswrapper[4909]: I1126 07:21:57.692390 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 26 07:21:58 crc kubenswrapper[4909]: I1126 07:21:58.178376 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 26 07:21:58 crc kubenswrapper[4909]: I1126 07:21:58.184939 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 26 07:21:58 crc kubenswrapper[4909]: I1126 07:21:58.193119 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 26 07:22:01 crc kubenswrapper[4909]: I1126 07:22:01.450816 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 26 07:22:01 crc kubenswrapper[4909]: I1126 07:22:01.453421 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 26 07:22:01 crc kubenswrapper[4909]: I1126 07:22:01.470423 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 26 07:22:01 crc kubenswrapper[4909]: I1126 07:22:01.471817 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 26 07:22:07 crc kubenswrapper[4909]: I1126 07:22:07.302113 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:22:07 crc kubenswrapper[4909]: I1126 07:22:07.302859 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.231672 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.232546 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7" containerName="openstackclient" 
containerID="cri-o://5490bde309c9533492858121c3c3979518f2ae4d1909c148a36b275cb690a58a" gracePeriod=2 Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.247582 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.300444 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sxvh6"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.325630 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placemente922-account-delete-ccdmj"] Nov 26 07:22:18 crc kubenswrapper[4909]: E1126 07:22:18.326072 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7" containerName="openstackclient" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.326084 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7" containerName="openstackclient" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.326307 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7" containerName="openstackclient" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.327018 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placemente922-account-delete-ccdmj" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.350445 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-8vd9g"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.354055 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-8vd9g" podUID="74ffd03c-7228-474b-830e-01f0be8998d5" containerName="openstack-network-exporter" containerID="cri-o://5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d" gracePeriod=30 Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.381807 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placemente922-account-delete-ccdmj"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.409601 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.409936 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerName="ovn-northd" containerID="cri-o://c45a332a215d0d2112e3b34236e6f38a29d0cb0b840b7dcbc7cc8721193190e5" gracePeriod=30 Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.410443 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerName="openstack-network-exporter" containerID="cri-o://961f74545256d62a34bc75e2a3d148f6d0a38e6f3d41c1cc128a6f4f1eccd8f1" gracePeriod=30 Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.440716 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-5f8k9"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.460360 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbicana478-account-delete-9mdlb"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.462163 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbicana478-account-delete-9mdlb" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.520338 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmj8w\" (UniqueName: \"kubernetes.io/projected/bc036cf2-920c-4497-bec8-cbf0d293c33a-kube-api-access-rmj8w\") pod \"placemente922-account-delete-ccdmj\" (UID: \"bc036cf2-920c-4497-bec8-cbf0d293c33a\") " pod="openstack/placemente922-account-delete-ccdmj" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.540638 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbicana478-account-delete-9mdlb"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.563694 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.625885 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js44f\" (UniqueName: \"kubernetes.io/projected/89513daa-9a0c-4888-9a33-0ba9c007da26-kube-api-access-js44f\") pod \"barbicana478-account-delete-9mdlb\" (UID: \"89513daa-9a0c-4888-9a33-0ba9c007da26\") " pod="openstack/barbicana478-account-delete-9mdlb" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.626021 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmj8w\" (UniqueName: \"kubernetes.io/projected/bc036cf2-920c-4497-bec8-cbf0d293c33a-kube-api-access-rmj8w\") pod \"placemente922-account-delete-ccdmj\" (UID: \"bc036cf2-920c-4497-bec8-cbf0d293c33a\") " pod="openstack/placemente922-account-delete-ccdmj" Nov 26 07:22:18 crc kubenswrapper[4909]: E1126 07:22:18.627544 4909 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Nov 26 07:22:18 crc kubenswrapper[4909]: E1126 07:22:18.627666 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data podName:07095ffe-adde-4857-93db-5a02f0adf9e6 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:19.127638297 +0000 UTC m=+1311.273849553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data") pod "cinder-api-0" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6") : secret "cinder-config-data" not found Nov 26 07:22:18 crc kubenswrapper[4909]: E1126 07:22:18.627950 4909 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Nov 26 07:22:18 crc kubenswrapper[4909]: E1126 07:22:18.627993 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts podName:07095ffe-adde-4857-93db-5a02f0adf9e6 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:19.127981466 +0000 UTC m=+1311.274192732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts") pod "cinder-api-0" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6") : secret "cinder-scripts" not found Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.659439 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance2354-account-delete-qnktx"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.660739 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance2354-account-delete-qnktx" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.674153 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance2354-account-delete-qnktx"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.683383 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmj8w\" (UniqueName: \"kubernetes.io/projected/bc036cf2-920c-4497-bec8-cbf0d293c33a-kube-api-access-rmj8w\") pod \"placemente922-account-delete-ccdmj\" (UID: \"bc036cf2-920c-4497-bec8-cbf0d293c33a\") " pod="openstack/placemente922-account-delete-ccdmj" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.735256 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js44f\" (UniqueName: \"kubernetes.io/projected/89513daa-9a0c-4888-9a33-0ba9c007da26-kube-api-access-js44f\") pod \"barbicana478-account-delete-9mdlb\" (UID: \"89513daa-9a0c-4888-9a33-0ba9c007da26\") " pod="openstack/barbicana478-account-delete-9mdlb" Nov 26 07:22:18 crc kubenswrapper[4909]: E1126 07:22:18.735894 4909 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 26 07:22:18 crc kubenswrapper[4909]: E1126 07:22:18.735973 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data podName:37fbb13e-7e2e-451d-af0e-a648c4cde4c2 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:19.235932961 +0000 UTC m=+1311.382144127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data") pod "rabbitmq-cell1-server-0" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2") : configmap "rabbitmq-cell1-config-data" not found Nov 26 07:22:18 crc kubenswrapper[4909]: E1126 07:22:18.764841 4909 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-sxvh6" message=< Nov 26 07:22:18 crc kubenswrapper[4909]: Exiting ovn-controller (1) [ OK ] Nov 26 07:22:18 crc kubenswrapper[4909]: > Nov 26 07:22:18 crc kubenswrapper[4909]: E1126 07:22:18.764892 4909 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-sxvh6" podUID="a74aad93-58f0-4023-95e3-3f0e92558f84" containerName="ovn-controller" containerID="cri-o://93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.764932 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-sxvh6" podUID="a74aad93-58f0-4023-95e3-3f0e92558f84" containerName="ovn-controller" containerID="cri-o://93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199" gracePeriod=30 Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.804313 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js44f\" (UniqueName: \"kubernetes.io/projected/89513daa-9a0c-4888-9a33-0ba9c007da26-kube-api-access-js44f\") pod \"barbicana478-account-delete-9mdlb\" (UID: \"89513daa-9a0c-4888-9a33-0ba9c007da26\") " pod="openstack/barbicana478-account-delete-9mdlb" Nov 26 
07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.822416 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi1964-account-delete-vmj4p"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.823687 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi1964-account-delete-vmj4p" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.859759 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p98cp\" (UniqueName: \"kubernetes.io/projected/ba28de5d-9cc0-475b-9eb5-6fce621ca4e2-kube-api-access-p98cp\") pod \"glance2354-account-delete-qnktx\" (UID: \"ba28de5d-9cc0-475b-9eb5-6fce621ca4e2\") " pod="openstack/glance2354-account-delete-qnktx" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.904837 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi1964-account-delete-vmj4p"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.939290 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbicana478-account-delete-9mdlb" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.964363 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell18b90-account-delete-n7278"] Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.966070 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell18b90-account-delete-n7278" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.967455 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7ndl\" (UniqueName: \"kubernetes.io/projected/c468acce-9341-4eff-94c9-f38b74077fdf-kube-api-access-c7ndl\") pod \"novaapi1964-account-delete-vmj4p\" (UID: \"c468acce-9341-4eff-94c9-f38b74077fdf\") " pod="openstack/novaapi1964-account-delete-vmj4p" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.967482 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p98cp\" (UniqueName: \"kubernetes.io/projected/ba28de5d-9cc0-475b-9eb5-6fce621ca4e2-kube-api-access-p98cp\") pod \"glance2354-account-delete-qnktx\" (UID: \"ba28de5d-9cc0-475b-9eb5-6fce621ca4e2\") " pod="openstack/glance2354-account-delete-qnktx" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.976611 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placemente922-account-delete-ccdmj" Nov 26 07:22:18 crc kubenswrapper[4909]: I1126 07:22:18.991777 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell18b90-account-delete-n7278"] Nov 26 07:22:19 crc kubenswrapper[4909]: I1126 07:22:19.009205 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p98cp\" (UniqueName: \"kubernetes.io/projected/ba28de5d-9cc0-475b-9eb5-6fce621ca4e2-kube-api-access-p98cp\") pod \"glance2354-account-delete-qnktx\" (UID: \"ba28de5d-9cc0-475b-9eb5-6fce621ca4e2\") " pod="openstack/glance2354-account-delete-qnktx" Nov 26 07:22:19 crc kubenswrapper[4909]: I1126 07:22:19.066163 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-dk2k6"] Nov 26 07:22:19 crc kubenswrapper[4909]: I1126 07:22:19.072873 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swhz7\" (UniqueName: \"kubernetes.io/projected/93d42f19-cfd6-4b06-aaf2-8febb4bd3945-kube-api-access-swhz7\") pod \"novacell18b90-account-delete-n7278\" (UID: \"93d42f19-cfd6-4b06-aaf2-8febb4bd3945\") " pod="openstack/novacell18b90-account-delete-n7278" Nov 26 07:22:19 crc kubenswrapper[4909]: I1126 07:22:19.073022 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7ndl\" (UniqueName: \"kubernetes.io/projected/c468acce-9341-4eff-94c9-f38b74077fdf-kube-api-access-c7ndl\") pod \"novaapi1964-account-delete-vmj4p\" (UID: \"c468acce-9341-4eff-94c9-f38b74077fdf\") " pod="openstack/novaapi1964-account-delete-vmj4p" Nov 26 07:22:19 crc kubenswrapper[4909]: I1126 07:22:19.118410 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-dk2k6"] Nov 26 07:22:19 crc kubenswrapper[4909]: I1126 07:22:19.121444 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7ndl\" (UniqueName: \"kubernetes.io/projected/c468acce-9341-4eff-94c9-f38b74077fdf-kube-api-access-c7ndl\") pod \"novaapi1964-account-delete-vmj4p\" (UID: \"c468acce-9341-4eff-94c9-f38b74077fdf\") " pod="openstack/novaapi1964-account-delete-vmj4p" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.185519 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swhz7\" (UniqueName: \"kubernetes.io/projected/93d42f19-cfd6-4b06-aaf2-8febb4bd3945-kube-api-access-swhz7\") pod \"novacell18b90-account-delete-n7278\" (UID: \"93d42f19-cfd6-4b06-aaf2-8febb4bd3945\") " pod="openstack/novacell18b90-account-delete-n7278" Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.186036 4909 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.186087 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts podName:07095ffe-adde-4857-93db-5a02f0adf9e6 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:20.186067935 +0000 UTC m=+1312.332279101 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts") pod "cinder-api-0" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6") : secret "cinder-scripts" not found Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.186356 4909 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.186379 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data podName:07095ffe-adde-4857-93db-5a02f0adf9e6 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:20.186371933 +0000 UTC m=+1312.332583099 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data") pod "cinder-api-0" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6") : secret "cinder-config-data" not found Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.191363 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell050ac-account-delete-2nvln"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.196499 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell050ac-account-delete-2nvln" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.220633 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swhz7\" (UniqueName: \"kubernetes.io/projected/93d42f19-cfd6-4b06-aaf2-8febb4bd3945-kube-api-access-swhz7\") pod \"novacell18b90-account-delete-n7278\" (UID: \"93d42f19-cfd6-4b06-aaf2-8febb4bd3945\") " pod="openstack/novacell18b90-account-delete-n7278" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.224415 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell050ac-account-delete-2nvln"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.243823 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-kfqlk"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.252054 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kfqlk"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.261453 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutroncf0d-account-delete-8ssk2"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.262790 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutroncf0d-account-delete-8ssk2" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.274082 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance2354-account-delete-qnktx" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.275038 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutroncf0d-account-delete-8ssk2"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.284886 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.285299 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="5ea1ebb8-6827-4f0b-a055-3b77e18755ac" containerName="openstack-network-exporter" containerID="cri-o://bbec5715c551f88ea231efe57c9124f91b9b77cfb5ebea4c9e465ffb097ed605" gracePeriod=300 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.290895 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w98zv\" (UniqueName: \"kubernetes.io/projected/793c680c-7448-478e-bbf4-bca888e7e4c9-kube-api-access-w98zv\") pod \"novacell050ac-account-delete-2nvln\" (UID: \"793c680c-7448-478e-bbf4-bca888e7e4c9\") " pod="openstack/novacell050ac-account-delete-2nvln" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.292319 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.292676 4909 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.292725 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data podName:37fbb13e-7e2e-451d-af0e-a648c4cde4c2 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:20.292709044 +0000 UTC m=+1312.438920210 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data") pod "rabbitmq-cell1-server-0" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2") : configmap "rabbitmq-cell1-config-data" not found Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.323444 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi1964-account-delete-vmj4p" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.370083 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell18b90-account-delete-n7278" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.377407 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jlpgv"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.395537 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w98zv\" (UniqueName: \"kubernetes.io/projected/793c680c-7448-478e-bbf4-bca888e7e4c9-kube-api-access-w98zv\") pod \"novacell050ac-account-delete-2nvln\" (UID: \"793c680c-7448-478e-bbf4-bca888e7e4c9\") " pod="openstack/novacell050ac-account-delete-2nvln" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.395675 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkhqn\" (UniqueName: \"kubernetes.io/projected/944eaf5b-6552-409a-a932-7fceaf182ff7-kube-api-access-dkhqn\") pod \"neutroncf0d-account-delete-8ssk2\" (UID: \"944eaf5b-6552-409a-a932-7fceaf182ff7\") " pod="openstack/neutroncf0d-account-delete-8ssk2" Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.401714 4909 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.401765 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data podName:e827f391-2fcb-4758-ae5e-deef3c712e53 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:19.901745199 +0000 UTC m=+1312.047956365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data") pod "rabbitmq-server-0" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53") : configmap "rabbitmq-config-data" not found Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.401914 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jlpgv"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.432581 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w98zv\" (UniqueName: \"kubernetes.io/projected/793c680c-7448-478e-bbf4-bca888e7e4c9-kube-api-access-w98zv\") pod \"novacell050ac-account-delete-2nvln\" (UID: \"793c680c-7448-478e-bbf4-bca888e7e4c9\") " pod="openstack/novacell050ac-account-delete-2nvln" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.451409 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="5ea1ebb8-6827-4f0b-a055-3b77e18755ac" containerName="ovsdbserver-sb" containerID="cri-o://6799943ef5fdd232348a851fd58224528db5e2724e6f622defc9357dcbfa014e" gracePeriod=300 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.454681 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8vd9g_74ffd03c-7228-474b-830e-01f0be8998d5/openstack-network-exporter/0.log" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.454763 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-8vd9g" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.463556 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-pp2qf"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.476944 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-pp2qf"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.489705 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.490300 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="edd763da-b7ea-4a61-846c-029eb54d9a08" containerName="openstack-network-exporter" containerID="cri-o://4daadf8992d503de7a3e7bc9d8d0fd7d0f19bd5aa98704266a78b9b676826679" gracePeriod=300 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.505088 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovs-rundir\") pod \"74ffd03c-7228-474b-830e-01f0be8998d5\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.505160 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-metrics-certs-tls-certs\") pod \"74ffd03c-7228-474b-830e-01f0be8998d5\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.505230 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-combined-ca-bundle\") pod \"74ffd03c-7228-474b-830e-01f0be8998d5\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.505291 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dbgn\" (UniqueName: \"kubernetes.io/projected/74ffd03c-7228-474b-830e-01f0be8998d5-kube-api-access-4dbgn\") pod \"74ffd03c-7228-474b-830e-01f0be8998d5\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.505332 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ffd03c-7228-474b-830e-01f0be8998d5-config\") pod \"74ffd03c-7228-474b-830e-01f0be8998d5\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.505352 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovn-rundir\") pod \"74ffd03c-7228-474b-830e-01f0be8998d5\" (UID: \"74ffd03c-7228-474b-830e-01f0be8998d5\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.505907 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkhqn\" (UniqueName: \"kubernetes.io/projected/944eaf5b-6552-409a-a932-7fceaf182ff7-kube-api-access-dkhqn\") pod \"neutroncf0d-account-delete-8ssk2\" (UID: \"944eaf5b-6552-409a-a932-7fceaf182ff7\") " pod="openstack/neutroncf0d-account-delete-8ssk2" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.506299 4909 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "74ffd03c-7228-474b-830e-01f0be8998d5" (UID: "74ffd03c-7228-474b-830e-01f0be8998d5"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.510557 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dtqxp"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.510845 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" podUID="acbd9367-38fb-4a1d-b818-c0bd4893c0de" containerName="dnsmasq-dns" containerID="cri-o://b3791de9e2272db503f056aa00b550a13134003b62ae8874f22a308095ddd4a6" gracePeriod=10 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.512106 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "74ffd03c-7228-474b-830e-01f0be8998d5" (UID: "74ffd03c-7228-474b-830e-01f0be8998d5"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.513019 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74ffd03c-7228-474b-830e-01f0be8998d5-config" (OuterVolumeSpecName: "config") pod "74ffd03c-7228-474b-830e-01f0be8998d5" (UID: "74ffd03c-7228-474b-830e-01f0be8998d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.520136 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74ffd03c-7228-474b-830e-01f0be8998d5-kube-api-access-4dbgn" (OuterVolumeSpecName: "kube-api-access-4dbgn") pod "74ffd03c-7228-474b-830e-01f0be8998d5" (UID: "74ffd03c-7228-474b-830e-01f0be8998d5"). InnerVolumeSpecName "kube-api-access-4dbgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.529278 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wfpgz"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.548718 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkhqn\" (UniqueName: \"kubernetes.io/projected/944eaf5b-6552-409a-a932-7fceaf182ff7-kube-api-access-dkhqn\") pod \"neutroncf0d-account-delete-8ssk2\" (UID: \"944eaf5b-6552-409a-a932-7fceaf182ff7\") " pod="openstack/neutroncf0d-account-delete-8ssk2" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.552565 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell050ac-account-delete-2nvln" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.553234 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wfpgz"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.573012 4909 generic.go:334] "Generic (PLEG): container finished" podID="a74aad93-58f0-4023-95e3-3f0e92558f84" containerID="93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199" exitCode=0 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.573143 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sxvh6" event={"ID":"a74aad93-58f0-4023-95e3-3f0e92558f84","Type":"ContainerDied","Data":"93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199"} Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.587158 4909 generic.go:334] "Generic (PLEG): container finished" podID="5ea1ebb8-6827-4f0b-a055-3b77e18755ac" containerID="bbec5715c551f88ea231efe57c9124f91b9b77cfb5ebea4c9e465ffb097ed605" exitCode=2 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.587222 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5ea1ebb8-6827-4f0b-a055-3b77e18755ac","Type":"ContainerDied","Data":"bbec5715c551f88ea231efe57c9124f91b9b77cfb5ebea4c9e465ffb097ed605"} Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.589899 4909 generic.go:334] "Generic (PLEG): container finished" podID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerID="961f74545256d62a34bc75e2a3d148f6d0a38e6f3d41c1cc128a6f4f1eccd8f1" exitCode=2 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.589952 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1f85aa19-7a2b-461e-9f33-6ba3f3261da4","Type":"ContainerDied","Data":"961f74545256d62a34bc75e2a3d148f6d0a38e6f3d41c1cc128a6f4f1eccd8f1"} Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.591546 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8vd9g_74ffd03c-7228-474b-830e-01f0be8998d5/openstack-network-exporter/0.log" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.591582 4909 generic.go:334] "Generic (PLEG): container finished" podID="74ffd03c-7228-474b-830e-01f0be8998d5" containerID="5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d" exitCode=2 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.591672 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8vd9g" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.591669 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8vd9g" event={"ID":"74ffd03c-7228-474b-830e-01f0be8998d5","Type":"ContainerDied","Data":"5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d"} Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.591771 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8vd9g" event={"ID":"74ffd03c-7228-474b-830e-01f0be8998d5","Type":"ContainerDied","Data":"98013ce43edf17179601ca531c0127bc242978353928c0517537fa99294165a3"} Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.591805 4909 scope.go:117] "RemoveContainer" containerID="5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.614128 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutroncf0d-account-delete-8ssk2" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.615182 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ffd03c-7228-474b-830e-01f0be8998d5-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.615207 4909 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.615223 4909 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/74ffd03c-7228-474b-830e-01f0be8998d5-ovs-rundir\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.615234 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dbgn\" (UniqueName: \"kubernetes.io/projected/74ffd03c-7228-474b-830e-01f0be8998d5-kube-api-access-4dbgn\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.682628 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="edd763da-b7ea-4a61-846c-029eb54d9a08" containerName="ovsdbserver-nb" containerID="cri-o://a08ac97fdf299b444c078205636c30387ea798646fc3f60cc7385009b74ea780" gracePeriod=300 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.695758 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74ffd03c-7228-474b-830e-01f0be8998d5" (UID: "74ffd03c-7228-474b-830e-01f0be8998d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.721030 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.735466 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovs-vswitchd" containerID="cri-o://fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" gracePeriod=29 Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.735836 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a08ac97fdf299b444c078205636c30387ea798646fc3f60cc7385009b74ea780" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.741714 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a08ac97fdf299b444c078205636c30387ea798646fc3f60cc7385009b74ea780" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.744319 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a08ac97fdf299b444c078205636c30387ea798646fc3f60cc7385009b74ea780" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.744397 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="edd763da-b7ea-4a61-846c-029eb54d9a08" containerName="ovsdbserver-nb" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.751674 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2zb5p"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.755791 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c45a332a215d0d2112e3b34236e6f38a29d0cb0b840b7dcbc7cc8721193190e5" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.767181 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c45a332a215d0d2112e3b34236e6f38a29d0cb0b840b7dcbc7cc8721193190e5" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.786876 4909 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 26 07:22:20 crc kubenswrapper[4909]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 26 07:22:20 crc kubenswrapper[4909]: + source 
/usr/local/bin/container-scripts/functions Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNBridge=br-int Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNRemote=tcp:localhost:6642 Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNEncapType=geneve Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNAvailabilityZones= Nov 26 07:22:20 crc kubenswrapper[4909]: ++ EnableChassisAsGateway=true Nov 26 07:22:20 crc kubenswrapper[4909]: ++ PhysicalNetworks= Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNHostName= Nov 26 07:22:20 crc kubenswrapper[4909]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 26 07:22:20 crc kubenswrapper[4909]: ++ ovs_dir=/var/lib/openvswitch Nov 26 07:22:20 crc kubenswrapper[4909]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 26 07:22:20 crc kubenswrapper[4909]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 26 07:22:20 crc kubenswrapper[4909]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 26 07:22:20 crc kubenswrapper[4909]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 26 07:22:20 crc kubenswrapper[4909]: + sleep 0.5 Nov 26 07:22:20 crc kubenswrapper[4909]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 26 07:22:20 crc kubenswrapper[4909]: + sleep 0.5 Nov 26 07:22:20 crc kubenswrapper[4909]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 26 07:22:20 crc kubenswrapper[4909]: + cleanup_ovsdb_server_semaphore Nov 26 07:22:20 crc kubenswrapper[4909]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 26 07:22:20 crc kubenswrapper[4909]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 26 07:22:20 crc kubenswrapper[4909]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-5f8k9" message=< Nov 26 07:22:20 crc kubenswrapper[4909]: Exiting ovsdb-server (5) [ OK ] Nov 26 07:22:20 crc kubenswrapper[4909]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 26 07:22:20 crc kubenswrapper[4909]: + source /usr/local/bin/container-scripts/functions Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNBridge=br-int Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNRemote=tcp:localhost:6642 Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNEncapType=geneve Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNAvailabilityZones= Nov 26 07:22:20 crc kubenswrapper[4909]: ++ EnableChassisAsGateway=true Nov 26 07:22:20 crc kubenswrapper[4909]: ++ PhysicalNetworks= Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNHostName= Nov 26 07:22:20 crc kubenswrapper[4909]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 26 07:22:20 crc kubenswrapper[4909]: ++ ovs_dir=/var/lib/openvswitch Nov 26 07:22:20 crc kubenswrapper[4909]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 26 07:22:20 crc kubenswrapper[4909]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 26 07:22:20 crc kubenswrapper[4909]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 26 07:22:20 crc kubenswrapper[4909]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 26 07:22:20 crc kubenswrapper[4909]: + sleep 0.5 Nov 26 07:22:20 crc kubenswrapper[4909]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 26 07:22:20 crc kubenswrapper[4909]: + sleep 0.5 Nov 26 07:22:20 crc kubenswrapper[4909]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 26 07:22:20 crc kubenswrapper[4909]: + cleanup_ovsdb_server_semaphore Nov 26 07:22:20 crc kubenswrapper[4909]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 26 07:22:20 crc kubenswrapper[4909]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 26 07:22:20 crc kubenswrapper[4909]: > Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.786918 4909 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 26 07:22:20 crc kubenswrapper[4909]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 26 07:22:20 crc kubenswrapper[4909]: + source /usr/local/bin/container-scripts/functions Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNBridge=br-int Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNRemote=tcp:localhost:6642 Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNEncapType=geneve Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNAvailabilityZones= Nov 26 07:22:20 crc kubenswrapper[4909]: ++ EnableChassisAsGateway=true Nov 26 07:22:20 crc kubenswrapper[4909]: ++ PhysicalNetworks= Nov 26 07:22:20 crc kubenswrapper[4909]: ++ OVNHostName= Nov 26 07:22:20 crc kubenswrapper[4909]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 26 07:22:20 crc kubenswrapper[4909]: ++ ovs_dir=/var/lib/openvswitch Nov 26 07:22:20 crc kubenswrapper[4909]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 26 07:22:20 crc kubenswrapper[4909]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 26 07:22:20 crc kubenswrapper[4909]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 26 07:22:20 crc kubenswrapper[4909]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 26 07:22:20 crc kubenswrapper[4909]: + sleep 0.5 Nov 26 07:22:20 crc kubenswrapper[4909]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 26 07:22:20 crc kubenswrapper[4909]: + sleep 0.5 Nov 26 07:22:20 crc kubenswrapper[4909]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 26 07:22:20 crc kubenswrapper[4909]: + cleanup_ovsdb_server_semaphore Nov 26 07:22:20 crc kubenswrapper[4909]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 26 07:22:20 crc kubenswrapper[4909]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 26 07:22:20 crc kubenswrapper[4909]: > pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server" containerID="cri-o://2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.786966 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server" containerID="cri-o://2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" gracePeriod=29 Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.787295 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c45a332a215d0d2112e3b34236e6f38a29d0cb0b840b7dcbc7cc8721193190e5" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.787390 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerName="ovn-northd" Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.802511 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.802715 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199 is running failed: container process not found" containerID="93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.802856 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.804143 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199 is running failed: container process not found" containerID="93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Nov 26 07:22:20 crc 
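The xtrace above is complete enough to recover the control flow of the PreStop hook. A minimal sketch of what /usr/local/bin/container-scripts/stop-ovsdb-server.sh appears to do, reconstructed purely from the trace (the sourced functions file and the cleanup_ovsdb_server_semaphore helper are assumed to behave exactly as the trace shows; this is not the shipped script): it polls every 0.5s for a semaphore file that marks it safe to stop ovsdb-server, removes the semaphore, then stops only the database server. The reported exit code 137 (128 + SIGKILL) is consistent with the hook still running when the runtime force-killed it at the end of the grace period.

```bash
#!/bin/bash
# Hypothetical reconstruction of stop-ovsdb-server.sh from the xtrace above.
# "functions" is assumed to define the variables the trace prints
# (OVNBridge, DB_FILE, SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE, ...).
set -x
source "$(dirname "$0")/functions"

# Wait until another container drops the semaphore file signalling that it
# is safe to stop ovsdb-server (the trace shows two 0.5s polls before it appears).
while [ ! -f "$SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE" ]; do
    sleep 0.5
done

# Per the trace, the helper just removes the semaphore file again.
cleanup_ovsdb_server_semaphore

# Stop only ovsdb-server; ovs-vswitchd has its own stop hook (gracePeriod=29 above).
/usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
```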
Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.804194 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.804626 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199 is running failed: container process not found" containerID="93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"]
Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.804713 4909 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-sxvh6" podUID="a74aad93-58f0-4023-95e3-3f0e92558f84" containerName="ovn-controller"
Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.804786 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.804803 4909 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server"
Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.805478 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.814276 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "74ffd03c-7228-474b-830e-01f0be8998d5" (UID: "74ffd03c-7228-474b-830e-01f0be8998d5"). InnerVolumeSpecName "metrics-certs-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.829560 4909 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74ffd03c-7228-474b-830e-01f0be8998d5-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.830118 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.830213 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovs-vswitchd" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.852258 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2zb5p"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.874262 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5fc4c8f8d8-g2ccp"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.874586 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5fc4c8f8d8-g2ccp" podUID="9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" containerName="placement-log" containerID="cri-o://e41849efae31b0c8581f7c9f6ee28750c66a400db628e1800e2864b8a75f5b77" gracePeriod=30 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.874939 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5fc4c8f8d8-g2ccp" podUID="9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" containerName="placement-api" containerID="cri-o://a05a2e8d981ebb4cc5877598dba394fd26e24d76c6a72edee8536bc2f0214b86" gracePeriod=30 Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.937526 4909 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:19.937638 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data podName:e827f391-2fcb-4758-ae5e-deef3c712e53 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:20.93761535 +0000 UTC m=+1313.083826516 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data") pod "rabbitmq-server-0" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53") : configmap "rabbitmq-config-data" not found Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.961177 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gdl8k"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:19.991311 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gdl8k"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.064759 4909 scope.go:117] "RemoveContainer" containerID="5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d" Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:20.072643 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d\": container with ID starting with 5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d not found: ID does not exist" containerID="5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.072695 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d"} err="failed to get container status \"5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d\": rpc error: code = NotFound desc = could not find container \"5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d\": container with ID starting with 5a0ae87c19a3c809739e63cbbffcab49cb7a3f1ddafee8399383096b2aaba48d not found: ID does not exist" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.140022 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-g4ljp"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.194427 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" podUID="acbd9367-38fb-4a1d-b818-c0bd4893c0de" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.199:5353: connect: connection refused" Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:20.316811 4909 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ea1ebb8_6827_4f0b_a055_3b77e18755ac.slice/crio-6799943ef5fdd232348a851fd58224528db5e2724e6f622defc9357dcbfa014e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacbd9367_38fb_4a1d_b818_c0bd4893c0de.slice/crio-b3791de9e2272db503f056aa00b550a13134003b62ae8874f22a308095ddd4a6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb793112e_ecec_4fb1_b06a_3bf4245af24b.slice/crio-conmon-2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedd763da_b7ea_4a61_846c_029eb54d9a08.slice/crio-conmon-4daadf8992d503de7a3e7bc9d8d0fd7d0f19bd5aa98704266a78b9b676826679.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacbd9367_38fb_4a1d_b818_c0bd4893c0de.slice/crio-conmon-b3791de9e2272db503f056aa00b550a13134003b62ae8874f22a308095ddd4a6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedd763da_b7ea_4a61_846c_029eb54d9a08.slice/crio-a08ac97fdf299b444c078205636c30387ea798646fc3f60cc7385009b74ea780.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedd763da_b7ea_4a61_846c_029eb54d9a08.slice/crio-4daadf8992d503de7a3e7bc9d8d0fd7d0f19bd5aa98704266a78b9b676826679.scope\": RecentStats: unable to find data in memory cache]" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.322117 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-g4ljp"] Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:20.334146 4909 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:20.334170 4909 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:20.334227 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data podName:37fbb13e-7e2e-451d-af0e-a648c4cde4c2 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:22.334208097 +0000 UTC m=+1314.480419263 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data") pod "rabbitmq-cell1-server-0" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2") : configmap "rabbitmq-cell1-config-data" not found Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:20.334245 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts podName:07095ffe-adde-4857-93db-5a02f0adf9e6 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:22.334238218 +0000 UTC m=+1314.480449384 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts") pod "cinder-api-0" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6") : secret "cinder-scripts" not found Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:20.334254 4909 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:20.334292 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data podName:07095ffe-adde-4857-93db-5a02f0adf9e6 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:22.334268729 +0000 UTC m=+1314.480479895 (durationBeforeRetry 2s). 
Nov 26 07:22:20 crc kubenswrapper[4909]: E1126 07:22:20.334292 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data podName:07095ffe-adde-4857-93db-5a02f0adf9e6 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:22.334268729 +0000 UTC m=+1314.480479895 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data") pod "cinder-api-0" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6") : secret "cinder-config-data" not found
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.371976 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.372447 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-server" containerID="cri-o://dced4a3ee055a4cc6d79d52944605e70abd5ed1457b4c96ba7b9b9ae67562306" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.372919 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="swift-recon-cron" containerID="cri-o://1b1f9c5a8d3224d9a8311e314bf6dc4a0fbdc6f393e2987d8955a46b68d1bada" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.372992 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="rsync" containerID="cri-o://96997ae8444f96d36126a818d42e9ce0882a0ec678fa1686cadf36da925626d7" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.373051 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-expirer" containerID="cri-o://07c32dca92ef9af6a5b2f1da9964db33a8d49c3a4d846c0cb66461ab457f596f" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.373101 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-updater" containerID="cri-o://deb5869801f78aa72238df2b9719a9337500c7d4fe3cef9fd57bfea3f27a9500" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.373151 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-auditor" containerID="cri-o://93d0e136e4522423ec6013c050a8ff1959c79f2b6857b7223d3792246312b6bd" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.373201 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-replicator" containerID="cri-o://e0d087da0faef2436ea0b5dc36389de6f9bcae11c0745372234e7e2e2515dc1e" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.373239 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-server" containerID="cri-o://ae51b3e0f8704221eb8fa99538d9b20411e525c3d485412522af25ca33ee293d" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.373276 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-updater" containerID="cri-o://12254f31c6a379da5fd4e34c45fd68057888fa099c912fe12dd9c1a881206bdf" gracePeriod=30
Nov 26 07:22:20
crc kubenswrapper[4909]: I1126 07:22:20.373317 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-auditor" containerID="cri-o://d57d935982096fce0c90d166aa9755252570903363ed795caa7ea306a1c4a125" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.373354 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-replicator" containerID="cri-o://5d4ff632621d60ecaadd162fdb8816be897785eaad8d97513f60206f89fa1487" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.373484 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-server" containerID="cri-o://c76f25e43175f3d693010c16bd1b421da9f361eea4704ff1766122084490d5d8" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.373576 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-reaper" containerID="cri-o://a449d7cd0e0553480c704885c8e18a406ff461623be069faf59ed385c2a89148" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.374694 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-replicator" containerID="cri-o://d349b9ce563e6e2048f46f3884eca2d8e3ba6436ecab095b55cfbdff47ed90e8" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.374885 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-auditor" containerID="cri-o://ef85ba50ad3703e23f7fcb4391c0f594c7dc9bc10c9b5ed2ff4ec5998223f89c" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.439500 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.439779 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="edba305d-f8e6-4ab0-ae68-30b668037813" containerName="cinder-scheduler" containerID="cri-o://83f2fa0df126cd84a93da94a384252310d087fec1c7f6c1abf2c21ba3382de98" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.440155 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="edba305d-f8e6-4ab0-ae68-30b668037813" containerName="probe" containerID="cri-o://b589d87e51374dd79f69c4819dca7d38374f8adb25ebf560946dbab0a7dc7461" gracePeriod=30
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.465702 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.466225 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6791905e-4b74-417e-bc1b-0747eac5878e" containerName="glance-log" containerID="cri-o://6e9c74d44de9181ab6e32d80d4b92f4cc0c240f37302b35ea2fea0bdf1f79435" gracePeriod=30
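Each "Killing container with a grace period" entry above implies the same termination sequence: the runtime delivers SIGTERM, waits up to gracePeriod seconds, and only then force-kills, which is why processes and hooks that outlive the window (like the ovsdb-server PreStop hook earlier) surface as exit code 137. A rough illustrative sketch of that sequence follows; this is ordinary shell against a plain PID, not the CRI-O implementation, and the PID argument is hypothetical.

```bash
#!/bin/bash
# Illustrative SIGTERM -> grace period -> SIGKILL sequence, as implied by the
# gracePeriod=30 entries above. Operates on a plain process, not a container.
pid=${1:?usage: $0 pid [grace_seconds]}
grace_seconds=${2:-30}

kill -TERM "$pid"                           # polite request to shut down
for _ in $(seq 1 "$grace_seconds"); do
    kill -0 "$pid" 2>/dev/null || exit 0    # exited within the grace period
    sleep 1
done
kill -KILL "$pid"   # still alive: force-kill; wait status becomes 137 (128+9)
```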
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6791905e-4b74-417e-bc1b-0747eac5878e" containerName="glance-httpd" containerID="cri-o://9af326558746ae1f5b6fd43ef25bfc03798d1031c68245c1a4e7bd66e604b033" gracePeriod=30 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.637307 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7" path="/var/lib/kubelet/pods/5ff67bf5-21ab-47bf-8669-7bb4b2e75cb7/volumes" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.638619 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85fee0d5-ad66-45f3-8bc3-1820cc137b65" path="/var/lib/kubelet/pods/85fee0d5-ad66-45f3-8bc3-1820cc137b65/volumes" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.639903 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb4cd6e-2d04-470f-b900-32a9a30a4137" path="/var/lib/kubelet/pods/8bb4cd6e-2d04-470f-b900-32a9a30a4137/volumes" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.641098 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94fb3d6d-c540-4c6d-af4d-257226561c47" path="/var/lib/kubelet/pods/94fb3d6d-c540-4c6d-af4d-257226561c47/volumes" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.646792 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4bc32bf-2659-4f99-bb10-8ac0617b317c" path="/var/lib/kubelet/pods/a4bc32bf-2659-4f99-bb10-8ac0617b317c/volumes" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.660017 4909 generic.go:334] "Generic (PLEG): container finished" podID="acbd9367-38fb-4a1d-b818-c0bd4893c0de" containerID="b3791de9e2272db503f056aa00b550a13134003b62ae8874f22a308095ddd4a6" exitCode=0 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.663674 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acec782e-7dc5-449d-a3bc-15e6100aa7c6" path="/var/lib/kubelet/pods/acec782e-7dc5-449d-a3bc-15e6100aa7c6/volumes" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.664838 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8" path="/var/lib/kubelet/pods/b4fd3bc9-0378-48b1-86fa-f66d5ba70ed8/volumes" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.665324 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6421e06-7f96-420b-8aa1-04fa59e832e9" path="/var/lib/kubelet/pods/b6421e06-7f96-420b-8aa1-04fa59e832e9/volumes" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.666059 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" event={"ID":"acbd9367-38fb-4a1d-b818-c0bd4893c0de","Type":"ContainerDied","Data":"b3791de9e2272db503f056aa00b550a13134003b62ae8874f22a308095ddd4a6"} Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.666089 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-8vd9g"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.666108 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-8vd9g"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.666125 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.666375 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="0fdde234-058b-4e39-a647-b87669d3fda5" containerName="glance-log" containerID="cri-o://346c5a3945f1eaf91c1fcf6d0365d419473a3372fe0402c424978821010165bc" gracePeriod=30 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.667156 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0fdde234-058b-4e39-a647-b87669d3fda5" containerName="glance-httpd" containerID="cri-o://c6050544ee612b28a902864fcf0420d3aca003e37b8bf69449abc96dd1260ebc" gracePeriod=30 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.691673 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.691937 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="07095ffe-adde-4857-93db-5a02f0adf9e6" containerName="cinder-api-log" containerID="cri-o://ffdcc38e7deac196a6a6dc47ac259ef1b3c1eaff9265239fbcdfb5425c3fe186" gracePeriod=30 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.692091 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="07095ffe-adde-4857-93db-5a02f0adf9e6" containerName="cinder-api" containerID="cri-o://3e62a202acc19dddd034b5dca03867a48ca15be9ff76077f42a4246722cebcdf" gracePeriod=30 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.721163 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zg6rx"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.714148 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_edd763da-b7ea-4a61-846c-029eb54d9a08/ovsdbserver-nb/0.log" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.723956 4909 generic.go:334] "Generic (PLEG): container finished" podID="edd763da-b7ea-4a61-846c-029eb54d9a08" containerID="4daadf8992d503de7a3e7bc9d8d0fd7d0f19bd5aa98704266a78b9b676826679" exitCode=2 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.723980 4909 generic.go:334] "Generic (PLEG): container finished" podID="edd763da-b7ea-4a61-846c-029eb54d9a08" containerID="a08ac97fdf299b444c078205636c30387ea798646fc3f60cc7385009b74ea780" exitCode=143 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.724100 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"edd763da-b7ea-4a61-846c-029eb54d9a08","Type":"ContainerDied","Data":"4daadf8992d503de7a3e7bc9d8d0fd7d0f19bd5aa98704266a78b9b676826679"} Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.724132 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"edd763da-b7ea-4a61-846c-029eb54d9a08","Type":"ContainerDied","Data":"a08ac97fdf299b444c078205636c30387ea798646fc3f60cc7385009b74ea780"} Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.762175 4909 generic.go:334] "Generic (PLEG): container finished" podID="9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" containerID="e41849efae31b0c8581f7c9f6ee28750c66a400db628e1800e2864b8a75f5b77" exitCode=143 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.762276 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5fc4c8f8d8-g2ccp" event={"ID":"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd","Type":"ContainerDied","Data":"e41849efae31b0c8581f7c9f6ee28750c66a400db628e1800e2864b8a75f5b77"} Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.783654 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-c218-account-create-kqmfw"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.801251 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c218-account-create-kqmfw"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.802773 4909 generic.go:334] "Generic (PLEG): container finished" podID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" exitCode=0 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.802852 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5f8k9" event={"ID":"b793112e-ecec-4fb1-b06a-3bf4245af24b","Type":"ContainerDied","Data":"2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a"} Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.819793 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zg6rx"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.824413 4909 generic.go:334] "Generic (PLEG): container finished" podID="0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7" containerID="5490bde309c9533492858121c3c3979518f2ae4d1909c148a36b275cb690a58a" exitCode=137 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.825167 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sxvh6" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.840035 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2354-account-create-b9ncs"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.854089 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rwhgj"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.875647 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2354-account-create-b9ncs"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.884625 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rwhgj"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.898043 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance2354-account-delete-qnktx"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.910158 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.910419 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerName="nova-api-log" containerID="cri-o://af985331eb5f612c15f9ade45f71e902d86e7e0cdc019c0a34c486877d6504c7" gracePeriod=30 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.910842 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerName="nova-api-api" containerID="cri-o://bd1ad1420e63a2b12ad289deaed447ee9e8f36e1917943a95d281b6236ae4f9e" gracePeriod=30 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.932802 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.957755 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.958213 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-log" containerID="cri-o://7efb83d37c0b27280af33c61d86ae63402ad34c0d0269b1b068ec8e29e729792" gracePeriod=30 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.959681 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-metadata" containerID="cri-o://930ad0d120558b695b7edf3c8655a0626d86ed2453cbe13f094dc27931f585b5" gracePeriod=30 Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.974204 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8bqh\" (UniqueName: \"kubernetes.io/projected/a74aad93-58f0-4023-95e3-3f0e92558f84-kube-api-access-k8bqh\") pod \"a74aad93-58f0-4023-95e3-3f0e92558f84\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.974336 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a74aad93-58f0-4023-95e3-3f0e92558f84-scripts\") pod \"a74aad93-58f0-4023-95e3-3f0e92558f84\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.974383 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run\") pod \"a74aad93-58f0-4023-95e3-3f0e92558f84\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.974424 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-combined-ca-bundle\") pod \"a74aad93-58f0-4023-95e3-3f0e92558f84\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.974446 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-log-ovn\") pod \"a74aad93-58f0-4023-95e3-3f0e92558f84\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.974481 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-ovn-controller-tls-certs\") pod \"a74aad93-58f0-4023-95e3-3f0e92558f84\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.974538 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run-ovn\") pod \"a74aad93-58f0-4023-95e3-3f0e92558f84\" (UID: \"a74aad93-58f0-4023-95e3-3f0e92558f84\") " Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.974779 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run" (OuterVolumeSpecName: "var-run") pod "a74aad93-58f0-4023-95e3-3f0e92558f84" (UID: "a74aad93-58f0-4023-95e3-3f0e92558f84"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.981490 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a74aad93-58f0-4023-95e3-3f0e92558f84" (UID: "a74aad93-58f0-4023-95e3-3f0e92558f84"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.983727 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a74aad93-58f0-4023-95e3-3f0e92558f84-scripts" (OuterVolumeSpecName: "scripts") pod "a74aad93-58f0-4023-95e3-3f0e92558f84" (UID: "a74aad93-58f0-4023-95e3-3f0e92558f84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:20 crc kubenswrapper[4909]: I1126 07:22:20.984179 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a74aad93-58f0-4023-95e3-3f0e92558f84" (UID: "a74aad93-58f0-4023-95e3-3f0e92558f84"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.989477 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a74aad93-58f0-4023-95e3-3f0e92558f84-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.989510 4909 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.989519 4909 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.989527 4909 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a74aad93-58f0-4023-95e3-3f0e92558f84-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: E1126 07:22:20.989629 4909 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 26 07:22:21 crc kubenswrapper[4909]: E1126 07:22:20.989687 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data podName:e827f391-2fcb-4758-ae5e-deef3c712e53 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:22.989667682 +0000 UTC m=+1315.135878848 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data") pod "rabbitmq-server-0" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53") : configmap "rabbitmq-config-data" not found Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994636 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="07c32dca92ef9af6a5b2f1da9964db33a8d49c3a4d846c0cb66461ab457f596f" exitCode=0 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994663 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="deb5869801f78aa72238df2b9719a9337500c7d4fe3cef9fd57bfea3f27a9500" exitCode=0 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994672 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="93d0e136e4522423ec6013c050a8ff1959c79f2b6857b7223d3792246312b6bd" exitCode=0 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994679 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="e0d087da0faef2436ea0b5dc36389de6f9bcae11c0745372234e7e2e2515dc1e" exitCode=0 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994685 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="12254f31c6a379da5fd4e34c45fd68057888fa099c912fe12dd9c1a881206bdf" exitCode=0 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994692 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="d57d935982096fce0c90d166aa9755252570903363ed795caa7ea306a1c4a125" exitCode=0 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994698 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="5d4ff632621d60ecaadd162fdb8816be897785eaad8d97513f60206f89fa1487" exitCode=0 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994705 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="ef85ba50ad3703e23f7fcb4391c0f594c7dc9bc10c9b5ed2ff4ec5998223f89c" exitCode=0 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994711 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="d349b9ce563e6e2048f46f3884eca2d8e3ba6436ecab095b55cfbdff47ed90e8" exitCode=0 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994766 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"07c32dca92ef9af6a5b2f1da9964db33a8d49c3a4d846c0cb66461ab457f596f"} Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994791 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"deb5869801f78aa72238df2b9719a9337500c7d4fe3cef9fd57bfea3f27a9500"} Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994800 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"93d0e136e4522423ec6013c050a8ff1959c79f2b6857b7223d3792246312b6bd"} Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994808 4909 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"e0d087da0faef2436ea0b5dc36389de6f9bcae11c0745372234e7e2e2515dc1e"} Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994817 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"12254f31c6a379da5fd4e34c45fd68057888fa099c912fe12dd9c1a881206bdf"} Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994825 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"d57d935982096fce0c90d166aa9755252570903363ed795caa7ea306a1c4a125"} Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994833 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"5d4ff632621d60ecaadd162fdb8816be897785eaad8d97513f60206f89fa1487"} Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994840 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"ef85ba50ad3703e23f7fcb4391c0f594c7dc9bc10c9b5ed2ff4ec5998223f89c"} Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.994848 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"d349b9ce563e6e2048f46f3884eca2d8e3ba6436ecab095b55cfbdff47ed90e8"} Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.997252 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7llw7"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.999173 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_edd763da-b7ea-4a61-846c-029eb54d9a08/ovsdbserver-nb/0.log" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:20.999243 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.005582 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7llw7"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.008701 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74aad93-58f0-4023-95e3-3f0e92558f84-kube-api-access-k8bqh" (OuterVolumeSpecName: "kube-api-access-k8bqh") pod "a74aad93-58f0-4023-95e3-3f0e92558f84" (UID: "a74aad93-58f0-4023-95e3-3f0e92558f84"). InnerVolumeSpecName "kube-api-access-k8bqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.012819 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8b90-account-create-t78ss"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.016441 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.023189 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5ea1ebb8-6827-4f0b-a055-3b77e18755ac/ovsdbserver-sb/0.log" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.023233 4909 generic.go:334] "Generic (PLEG): container finished" podID="5ea1ebb8-6827-4f0b-a055-3b77e18755ac" containerID="6799943ef5fdd232348a851fd58224528db5e2724e6f622defc9357dcbfa014e" exitCode=143 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.023269 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5ea1ebb8-6827-4f0b-a055-3b77e18755ac","Type":"ContainerDied","Data":"6799943ef5fdd232348a851fd58224528db5e2724e6f622defc9357dcbfa014e"} Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.025623 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8b90-account-create-t78ss"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.047262 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6489c4db99-sc69l"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.047567 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6489c4db99-sc69l" podUID="d79c0347-3494-4451-83b3-9919dd346f19" containerName="barbican-worker-log" containerID="cri-o://60fee10fbe2a728536d6c3503ed6b4b03afa53a6b02b3aa79491273e4536b60d" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.047732 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6489c4db99-sc69l" podUID="d79c0347-3494-4451-83b3-9919dd346f19" containerName="barbican-worker" containerID="cri-o://8d96446636d1c32bd33935c854eafae7f92ad00599e940eb16e0ee0ad1233ddc" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.070484 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74f9bb65df-qpbtq"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.071052 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74f9bb65df-qpbtq" podUID="978782ca-c440-4bb1-9516-30115aa4a0b2" containerName="neutron-api" containerID="cri-o://0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.071175 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74f9bb65df-qpbtq" podUID="978782ca-c440-4bb1-9516-30115aa4a0b2" containerName="neutron-httpd" containerID="cri-o://76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.085657 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell050ac-account-delete-2nvln"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.090405 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7t9l\" (UniqueName: \"kubernetes.io/projected/edd763da-b7ea-4a61-846c-029eb54d9a08-kube-api-access-b7t9l\") pod \"edd763da-b7ea-4a61-846c-029eb54d9a08\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.090566 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"edd763da-b7ea-4a61-846c-029eb54d9a08\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.090658 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-scripts\") pod \"edd763da-b7ea-4a61-846c-029eb54d9a08\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.090688 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-metrics-certs-tls-certs\") pod \"edd763da-b7ea-4a61-846c-029eb54d9a08\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.090719 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-config\") pod \"edd763da-b7ea-4a61-846c-029eb54d9a08\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.090764 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdbserver-nb-tls-certs\") pod \"edd763da-b7ea-4a61-846c-029eb54d9a08\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.090844 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdb-rundir\") pod \"edd763da-b7ea-4a61-846c-029eb54d9a08\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.090897 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-combined-ca-bundle\") pod \"edd763da-b7ea-4a61-846c-029eb54d9a08\" (UID: \"edd763da-b7ea-4a61-846c-029eb54d9a08\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.099350 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8bqh\" (UniqueName: \"kubernetes.io/projected/a74aad93-58f0-4023-95e3-3f0e92558f84-kube-api-access-k8bqh\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.103496 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-config" (OuterVolumeSpecName: "config") pod "edd763da-b7ea-4a61-846c-029eb54d9a08" (UID: "edd763da-b7ea-4a61-846c-029eb54d9a08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.103923 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a74aad93-58f0-4023-95e3-3f0e92558f84" (UID: "a74aad93-58f0-4023-95e3-3f0e92558f84"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.103959 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd763da-b7ea-4a61-846c-029eb54d9a08-kube-api-access-b7t9l" (OuterVolumeSpecName: "kube-api-access-b7t9l") pod "edd763da-b7ea-4a61-846c-029eb54d9a08" (UID: "edd763da-b7ea-4a61-846c-029eb54d9a08"). InnerVolumeSpecName "kube-api-access-b7t9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.104084 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell18b90-account-delete-n7278"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.104270 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-scripts" (OuterVolumeSpecName: "scripts") pod "edd763da-b7ea-4a61-846c-029eb54d9a08" (UID: "edd763da-b7ea-4a61-846c-029eb54d9a08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.104507 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "edd763da-b7ea-4a61-846c-029eb54d9a08" (UID: "edd763da-b7ea-4a61-846c-029eb54d9a08"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.105446 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "edd763da-b7ea-4a61-846c-029eb54d9a08" (UID: "edd763da-b7ea-4a61-846c-029eb54d9a08"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.108364 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5ea1ebb8-6827-4f0b-a055-3b77e18755ac/ovsdbserver-sb/0.log" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.108452 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.112823 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-50ac-account-create-c58tq"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.150538 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-pj8dd"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.158091 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edd763da-b7ea-4a61-846c-029eb54d9a08" (UID: "edd763da-b7ea-4a61-846c-029eb54d9a08"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.162986 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pj8dd"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.169076 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8597f74f8-cp26v"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.169420 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8597f74f8-cp26v" podUID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerName="barbican-api-log" containerID="cri-o://b0232145ed4b3712ecaad8243ac7d77b6582f6fbaac7a7c0a418835faaca93d0" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.170073 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8597f74f8-cp26v" podUID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerName="barbican-api" containerID="cri-o://18c69a50eb20ebbdeb9c3c4cec5b96f232a261f134a17ae2bf389ddcaf0b29a6" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.200681 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjv9r\" (UniqueName: \"kubernetes.io/projected/acbd9367-38fb-4a1d-b818-c0bd4893c0de-kube-api-access-bjv9r\") pod \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.200751 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-config\") pod \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.200805 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-svc\") pod \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.200888 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-swift-storage-0\") pod \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.201023 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-sb\") pod \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.201072 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-nb\") pod \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\" (UID: \"acbd9367-38fb-4a1d-b818-c0bd4893c0de\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.201469 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.201482 4909 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.201491 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd763da-b7ea-4a61-846c-029eb54d9a08-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.201499 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.201511 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.201521 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7t9l\" (UniqueName: \"kubernetes.io/projected/edd763da-b7ea-4a61-846c-029eb54d9a08-kube-api-access-b7t9l\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.201529 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.218046 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acbd9367-38fb-4a1d-b818-c0bd4893c0de-kube-api-access-bjv9r" (OuterVolumeSpecName: "kube-api-access-bjv9r") pod "acbd9367-38fb-4a1d-b818-c0bd4893c0de" (UID: "acbd9367-38fb-4a1d-b818-c0bd4893c0de"). InnerVolumeSpecName "kube-api-access-bjv9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.235863 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "a74aad93-58f0-4023-95e3-3f0e92558f84" (UID: "a74aad93-58f0-4023-95e3-3f0e92558f84"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.251133 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-50ac-account-create-c58tq"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.252297 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.263019 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cf0d-account-create-64lww"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.265208 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-nf9f5"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.273447 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-85d774bbbb-slpbz"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.274259 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" podUID="7382debb-3dc4-4849-9109-5d415c6a196f" containerName="barbican-keystone-listener-log" containerID="cri-o://06b62a49cf46e07b4f7ce61be83af8cadb8ad382322ec95765c2a554c930637b" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.274405 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" podUID="7382debb-3dc4-4849-9109-5d415c6a196f" containerName="barbican-keystone-listener" containerID="cri-o://448123d43785c504b22d8f7af78abefd489be91367d82b1fa2c04ab27f96653f" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.289569 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-nf9f5"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.315587 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cf0d-account-create-64lww"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.315841 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="24fe368f-39d5-438d-baf0-4e66700131f4" containerName="galera" containerID="cri-o://4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.316856 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-combined-ca-bundle\") pod \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.316999 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-config\") pod \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.320917 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdbserver-sb-tls-certs\") pod \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.321025 
4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-metrics-certs-tls-certs\") pod \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.321053 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pksq\" (UniqueName: \"kubernetes.io/projected/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-kube-api-access-5pksq\") pod \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.321081 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.321143 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdb-rundir\") pod \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.329446 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-config" (OuterVolumeSpecName: "config") pod "5ea1ebb8-6827-4f0b-a055-3b77e18755ac" (UID: "5ea1ebb8-6827-4f0b-a055-3b77e18755ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.329649 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "5ea1ebb8-6827-4f0b-a055-3b77e18755ac" (UID: "5ea1ebb8-6827-4f0b-a055-3b77e18755ac"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.332994 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-scripts\") pod \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\" (UID: \"5ea1ebb8-6827-4f0b-a055-3b77e18755ac\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.336131 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-scripts" (OuterVolumeSpecName: "scripts") pod "5ea1ebb8-6827-4f0b-a055-3b77e18755ac" (UID: "5ea1ebb8-6827-4f0b-a055-3b77e18755ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.348975 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-kube-api-access-5pksq" (OuterVolumeSpecName: "kube-api-access-5pksq") pod "5ea1ebb8-6827-4f0b-a055-3b77e18755ac" (UID: "5ea1ebb8-6827-4f0b-a055-3b77e18755ac"). InnerVolumeSpecName "kube-api-access-5pksq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.349378 4909 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74aad93-58f0-4023-95e3-3f0e92558f84-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.349408 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.349418 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.349433 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pksq\" (UniqueName: \"kubernetes.io/projected/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-kube-api-access-5pksq\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.349442 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.349451 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.349459 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjv9r\" (UniqueName: \"kubernetes.io/projected/acbd9367-38fb-4a1d-b818-c0bd4893c0de-kube-api-access-bjv9r\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.354357 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutroncf0d-account-delete-8ssk2"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.366153 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "5ea1ebb8-6827-4f0b-a055-3b77e18755ac" (UID: "5ea1ebb8-6827-4f0b-a055-3b77e18755ac"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.376886 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.379969 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "acbd9367-38fb-4a1d-b818-c0bd4893c0de" (UID: "acbd9367-38fb-4a1d-b818-c0bd4893c0de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.398867 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.402903 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.411256 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "edd763da-b7ea-4a61-846c-029eb54d9a08" (UID: "edd763da-b7ea-4a61-846c-029eb54d9a08"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.411643 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.411935 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://fc4e1d11210ea94d5e20c39c83a322bf0f7dc51504c8b4db99b77d2610531017" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.412449 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "acbd9367-38fb-4a1d-b818-c0bd4893c0de" (UID: "acbd9367-38fb-4a1d-b818-c0bd4893c0de"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.428392 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jlct9"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.429988 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-config" (OuterVolumeSpecName: "config") pod "acbd9367-38fb-4a1d-b818-c0bd4893c0de" (UID: "acbd9367-38fb-4a1d-b818-c0bd4893c0de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.436256 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "acbd9367-38fb-4a1d-b818-c0bd4893c0de" (UID: "acbd9367-38fb-4a1d-b818-c0bd4893c0de"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.450246 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-combined-ca-bundle\") pod \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.450282 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5fcz\" (UniqueName: \"kubernetes.io/projected/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-kube-api-access-f5fcz\") pod \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.450393 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config-secret\") pod \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.450459 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config\") pod \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\" (UID: \"0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7\") " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.450740 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.450759 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.450768 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.450777 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.450785 4909 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.450795 4909 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.451014 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ea1ebb8-6827-4f0b-a055-3b77e18755ac" (UID: "5ea1ebb8-6827-4f0b-a055-3b77e18755ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.451200 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="37fbb13e-7e2e-451d-af0e-a648c4cde4c2" containerName="rabbitmq" containerID="cri-o://c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe" gracePeriod=604800 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.459336 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-kube-api-access-f5fcz" (OuterVolumeSpecName: "kube-api-access-f5fcz") pod "0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7" (UID: "0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7"). InnerVolumeSpecName "kube-api-access-f5fcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.471949 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.472242 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="2c6b5670-38ee-4d52-af67-1e187962d73d" containerName="nova-cell1-conductor-conductor" containerID="cri-o://2f6a9a868f36816e8779d6bd9b8ec2e106d2790ceb880151f8a96ae57bf045a4" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.481139 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k8rzm"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.496397 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e827f391-2fcb-4758-ae5e-deef3c712e53" containerName="rabbitmq" containerID="cri-o://a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32" gracePeriod=604800 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.499756 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.504044 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="0d2c4878-7f21-469c-b19b-c76f335e9e75" containerName="nova-cell0-conductor-conductor" containerID="cri-o://000a88ca7c5406a86dd0230b6355446065c541ff1b5dd6796e96d8e7b58a4adc" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.516696 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jlct9"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.517079 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.524278 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k8rzm"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.526737 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "edd763da-b7ea-4a61-846c-029eb54d9a08" (UID: "edd763da-b7ea-4a61-846c-029eb54d9a08"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.552087 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5fcz\" (UniqueName: \"kubernetes.io/projected/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-kube-api-access-f5fcz\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.552125 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.552134 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd763da-b7ea-4a61-846c-029eb54d9a08-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.552142 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.558844 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "5ea1ebb8-6827-4f0b-a055-3b77e18755ac" (UID: "5ea1ebb8-6827-4f0b-a055-3b77e18755ac"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.561398 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7" (UID: "0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.562916 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.563209 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="db82c9cc-8a13-4751-b93c-d5f9452dea67" containerName="nova-scheduler-scheduler" containerID="cri-o://0b5daeedd458f13616c9700a107ce6438a90f188ad69b81821639742af27e6e9" gracePeriod=30 Nov 26 07:22:21 crc kubenswrapper[4909]: E1126 07:22:21.565720 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f6a9a868f36816e8779d6bd9b8ec2e106d2790ceb880151f8a96ae57bf045a4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:21 crc kubenswrapper[4909]: E1126 07:22:21.576427 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f6a9a868f36816e8779d6bd9b8ec2e106d2790ceb880151f8a96ae57bf045a4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.583307 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7" (UID: "0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: E1126 07:22:21.584733 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f6a9a868f36816e8779d6bd9b8ec2e106d2790ceb880151f8a96ae57bf045a4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:21 crc kubenswrapper[4909]: E1126 07:22:21.584808 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="2c6b5670-38ee-4d52-af67-1e187962d73d" containerName="nova-cell1-conductor-conductor" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.595037 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "acbd9367-38fb-4a1d-b818-c0bd4893c0de" (UID: "acbd9367-38fb-4a1d-b818-c0bd4893c0de"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.657123 4909 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.657161 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.657171 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.657180 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acbd9367-38fb-4a1d-b818-c0bd4893c0de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.670799 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "5ea1ebb8-6827-4f0b-a055-3b77e18755ac" (UID: "5ea1ebb8-6827-4f0b-a055-3b77e18755ac"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.693723 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7" (UID: "0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.762883 4909 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ea1ebb8-6827-4f0b-a055-3b77e18755ac-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.762954 4909 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.856822 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutroncf0d-account-delete-8ssk2"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.877435 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbicana478-account-delete-9mdlb"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.886726 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell18b90-account-delete-n7278"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.894491 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi1964-account-delete-vmj4p"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.900378 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placemente922-account-delete-ccdmj"] Nov 26 07:22:21 crc kubenswrapper[4909]: W1126 07:22:21.913468 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod944eaf5b_6552_409a_a932_7fceaf182ff7.slice/crio-bae3d62868c2415636e58387f377780fdc2ca6213b52bb43487badf27d8f523c WatchSource:0}: Error finding container bae3d62868c2415636e58387f377780fdc2ca6213b52bb43487badf27d8f523c: Status 404 returned error can't find the container with id bae3d62868c2415636e58387f377780fdc2ca6213b52bb43487badf27d8f523c Nov 26 07:22:21 crc kubenswrapper[4909]: W1126 07:22:21.926018 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89513daa_9a0c_4888_9a33_0ba9c007da26.slice/crio-470d455335109ddcd9477ac526c5c9a0c54c75aecb96b4b4a66e0e0ecffca745 WatchSource:0}: Error finding container 470d455335109ddcd9477ac526c5c9a0c54c75aecb96b4b4a66e0e0ecffca745: Status 404 returned error can't find the container with id 470d455335109ddcd9477ac526c5c9a0c54c75aecb96b4b4a66e0e0ecffca745 Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.945662 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance2354-account-delete-qnktx"] Nov 26 07:22:21 crc kubenswrapper[4909]: I1126 07:22:21.964069 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell050ac-account-delete-2nvln"] Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.043559 4909 generic.go:334] "Generic (PLEG): container finished" podID="978782ca-c440-4bb1-9516-30115aa4a0b2" containerID="76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876" exitCode=0 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.043679 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74f9bb65df-qpbtq" event={"ID":"978782ca-c440-4bb1-9516-30115aa4a0b2","Type":"ContainerDied","Data":"76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.046967 4909 generic.go:334] "Generic (PLEG): 
container finished" podID="d79c0347-3494-4451-83b3-9919dd346f19" containerID="60fee10fbe2a728536d6c3503ed6b4b03afa53a6b02b3aa79491273e4536b60d" exitCode=143 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.047016 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6489c4db99-sc69l" event={"ID":"d79c0347-3494-4451-83b3-9919dd346f19","Type":"ContainerDied","Data":"60fee10fbe2a728536d6c3503ed6b4b03afa53a6b02b3aa79491273e4536b60d"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.055853 4909 scope.go:117] "RemoveContainer" containerID="5490bde309c9533492858121c3c3979518f2ae4d1909c148a36b275cb690a58a" Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.055998 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.099382 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="96997ae8444f96d36126a818d42e9ce0882a0ec678fa1686cadf36da925626d7" exitCode=0 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.099417 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="ae51b3e0f8704221eb8fa99538d9b20411e525c3d485412522af25ca33ee293d" exitCode=0 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.099425 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="c76f25e43175f3d693010c16bd1b421da9f361eea4704ff1766122084490d5d8" exitCode=0 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.099432 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="a449d7cd0e0553480c704885c8e18a406ff461623be069faf59ed385c2a89148" exitCode=0 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.099438 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="dced4a3ee055a4cc6d79d52944605e70abd5ed1457b4c96ba7b9b9ae67562306" exitCode=0 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.099481 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"96997ae8444f96d36126a818d42e9ce0882a0ec678fa1686cadf36da925626d7"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.099508 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"ae51b3e0f8704221eb8fa99538d9b20411e525c3d485412522af25ca33ee293d"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.099518 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"c76f25e43175f3d693010c16bd1b421da9f361eea4704ff1766122084490d5d8"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.099528 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"a449d7cd0e0553480c704885c8e18a406ff461623be069faf59ed385c2a89148"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.099536 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"dced4a3ee055a4cc6d79d52944605e70abd5ed1457b4c96ba7b9b9ae67562306"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.118643 4909 generic.go:334] "Generic (PLEG): container finished" podID="7382debb-3dc4-4849-9109-5d415c6a196f" containerID="06b62a49cf46e07b4f7ce61be83af8cadb8ad382322ec95765c2a554c930637b" exitCode=143 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.118866 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" event={"ID":"7382debb-3dc4-4849-9109-5d415c6a196f","Type":"ContainerDied","Data":"06b62a49cf46e07b4f7ce61be83af8cadb8ad382322ec95765c2a554c930637b"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.125271 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6791905e-4b74-417e-bc1b-0747eac5878e","Type":"ContainerDied","Data":"6e9c74d44de9181ab6e32d80d4b92f4cc0c240f37302b35ea2fea0bdf1f79435"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.125236 4909 generic.go:334] "Generic (PLEG): container finished" podID="6791905e-4b74-417e-bc1b-0747eac5878e" containerID="6e9c74d44de9181ab6e32d80d4b92f4cc0c240f37302b35ea2fea0bdf1f79435" exitCode=143 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.133409 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_edd763da-b7ea-4a61-846c-029eb54d9a08/ovsdbserver-nb/0.log" Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.133518 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"edd763da-b7ea-4a61-846c-029eb54d9a08","Type":"ContainerDied","Data":"29db3239abfc8061ee8646dbd267af74c1583430d6ca7871592f839012e0448e"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.133637 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.151882 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sxvh6" event={"ID":"a74aad93-58f0-4023-95e3-3f0e92558f84","Type":"ContainerDied","Data":"d8b74a9a473b27b0fd6dd61141c23e58b8373c6f5d5b09d5a70fd09f16f457cf"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.152016 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sxvh6" Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.162908 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutroncf0d-account-delete-8ssk2" event={"ID":"944eaf5b-6552-409a-a932-7fceaf182ff7","Type":"ContainerStarted","Data":"bae3d62868c2415636e58387f377780fdc2ca6213b52bb43487badf27d8f523c"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.167863 4909 generic.go:334] "Generic (PLEG): container finished" podID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerID="7efb83d37c0b27280af33c61d86ae63402ad34c0d0269b1b068ec8e29e729792" exitCode=143 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.167934 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62","Type":"ContainerDied","Data":"7efb83d37c0b27280af33c61d86ae63402ad34c0d0269b1b068ec8e29e729792"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.185716 4909 generic.go:334] "Generic (PLEG): container finished" podID="07095ffe-adde-4857-93db-5a02f0adf9e6" containerID="ffdcc38e7deac196a6a6dc47ac259ef1b3c1eaff9265239fbcdfb5425c3fe186" exitCode=143 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.185833 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"07095ffe-adde-4857-93db-5a02f0adf9e6","Type":"ContainerDied","Data":"ffdcc38e7deac196a6a6dc47ac259ef1b3c1eaff9265239fbcdfb5425c3fe186"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.194925 4909 generic.go:334] "Generic (PLEG): container finished" podID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerID="af985331eb5f612c15f9ade45f71e902d86e7e0cdc019c0a34c486877d6504c7" exitCode=143 Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.194999 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88","Type":"ContainerDied","Data":"af985331eb5f612c15f9ade45f71e902d86e7e0cdc019c0a34c486877d6504c7"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.202439 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placemente922-account-delete-ccdmj" event={"ID":"bc036cf2-920c-4497-bec8-cbf0d293c33a","Type":"ContainerStarted","Data":"bd0bd9845229a3f8839101bf9448f6ec9c9d468e93752d55ca93e1c3f4b4b087"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.219424 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5ea1ebb8-6827-4f0b-a055-3b77e18755ac/ovsdbserver-sb/0.log" Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.219542 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5ea1ebb8-6827-4f0b-a055-3b77e18755ac","Type":"ContainerDied","Data":"940a369ef08c6a36d1303a67c572a6eea4aef5595c8f23221da0147127ee47b8"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.219806 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.228262 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp" event={"ID":"acbd9367-38fb-4a1d-b818-c0bd4893c0de","Type":"ContainerDied","Data":"52e6c56d17f7a05e7f9cf285eedb00e1845e4aa1d7dd78c81f4bf2f9dd2f6204"} Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.228470 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.228470 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dtqxp"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.245212 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicana478-account-delete-9mdlb" event={"ID":"89513daa-9a0c-4888-9a33-0ba9c007da26","Type":"ContainerStarted","Data":"470d455335109ddcd9477ac526c5c9a0c54c75aecb96b4b4a66e0e0ecffca745"}
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.261493 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell050ac-account-delete-2nvln" event={"ID":"793c680c-7448-478e-bbf4-bca888e7e4c9","Type":"ContainerStarted","Data":"afa67d7abffb104a9a0a0133de943389f52ee43d62d6e143ae42e5fe8b547f15"}
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.264572 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2354-account-delete-qnktx" event={"ID":"ba28de5d-9cc0-475b-9eb5-6fce621ca4e2","Type":"ContainerStarted","Data":"ed38b9c53894c371b6cc4eccd3489b009fe1abcd1cd4e957d7e0ddbc69c2aa7b"}
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.271159 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell18b90-account-delete-n7278" event={"ID":"93d42f19-cfd6-4b06-aaf2-8febb4bd3945","Type":"ContainerStarted","Data":"6d86292d1bea790cc218aff640cee6dd38bab6373add5ae7dac4a5963a0e669f"}
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.289099 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi1964-account-delete-vmj4p" event={"ID":"c468acce-9341-4eff-94c9-f38b74077fdf","Type":"ContainerStarted","Data":"e3282db93757a8ba36a77ee24813824870d5393e3122ad327f1d798a44729554"}
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.297394 4909 generic.go:334] "Generic (PLEG): container finished" podID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerID="b0232145ed4b3712ecaad8243ac7d77b6582f6fbaac7a7c0a418835faaca93d0" exitCode=143
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.297501 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8597f74f8-cp26v" event={"ID":"1746d8cc-9394-471e-a1c3-5471e65dfc73","Type":"ContainerDied","Data":"b0232145ed4b3712ecaad8243ac7d77b6582f6fbaac7a7c0a418835faaca93d0"}
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.306439 4909 generic.go:334] "Generic (PLEG): container finished" podID="0fdde234-058b-4e39-a647-b87669d3fda5" containerID="346c5a3945f1eaf91c1fcf6d0365d419473a3372fe0402c424978821010165bc" exitCode=143
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.306551 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fdde234-058b-4e39-a647-b87669d3fda5","Type":"ContainerDied","Data":"346c5a3945f1eaf91c1fcf6d0365d419473a3372fe0402c424978821010165bc"}
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.312314 4909 generic.go:334] "Generic (PLEG): container finished" podID="edba305d-f8e6-4ab0-ae68-30b668037813" containerID="b589d87e51374dd79f69c4819dca7d38374f8adb25ebf560946dbab0a7dc7461" exitCode=0
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.312369 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edba305d-f8e6-4ab0-ae68-30b668037813","Type":"ContainerDied","Data":"b589d87e51374dd79f69c4819dca7d38374f8adb25ebf560946dbab0a7dc7461"}
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.328079 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-567d49d699-wbzsj"]
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.328474 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-567d49d699-wbzsj" podUID="3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" containerName="proxy-httpd" containerID="cri-o://198a28e1346d287b3f7810156c663f6c5e14b09f746144f9f4aee54002dbd0a2" gracePeriod=30
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.328760 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-567d49d699-wbzsj" podUID="3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" containerName="proxy-server" containerID="cri-o://880e17b02d4c9dab1267c346c106365cd7c194623167e4792837a0d418d59a8f" gracePeriod=30
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.337307 4909 scope.go:117] "RemoveContainer" containerID="4daadf8992d503de7a3e7bc9d8d0fd7d0f19bd5aa98704266a78b9b676826679"
Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.386764 4909 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.386851 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data podName:37fbb13e-7e2e-451d-af0e-a648c4cde4c2 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:26.386828091 +0000 UTC m=+1318.533039257 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data") pod "rabbitmq-cell1-server-0" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2") : configmap "rabbitmq-cell1-config-data" not found
Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.387276 4909 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found
Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.387315 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts podName:07095ffe-adde-4857-93db-5a02f0adf9e6 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:26.387305865 +0000 UTC m=+1318.533517031 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts") pod "cinder-api-0" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6") : secret "cinder-scripts" not found
Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.387367 4909 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found
Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.387391 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data podName:07095ffe-adde-4857-93db-5a02f0adf9e6 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:26.387383617 +0000 UTC m=+1318.533594783 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data") pod "cinder-api-0" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6") : secret "cinder-config-data" not found
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.390037 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dtqxp"]
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.398828 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dtqxp"]
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.406967 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sxvh6"]
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.425540 4909 scope.go:117] "RemoveContainer" containerID="a08ac97fdf299b444c078205636c30387ea798646fc3f60cc7385009b74ea780"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.431002 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sxvh6"]
Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.438225 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0b5daeedd458f13616c9700a107ce6438a90f188ad69b81821639742af27e6e9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.453142 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0b5daeedd458f13616c9700a107ce6438a90f188ad69b81821639742af27e6e9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.454402 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.462421 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0b5daeedd458f13616c9700a107ce6438a90f188ad69b81821639742af27e6e9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.462496 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="db82c9cc-8a13-4751-b93c-d5f9452dea67" containerName="nova-scheduler-scheduler"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.467640 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.535523 4909 scope.go:117] "RemoveContainer" containerID="93db5282a5d3a8a5b42c10fd0761792f7b599e1119af2d251fa7fca907026199"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.570038 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7" path="/var/lib/kubelet/pods/0a1b1dbf-0afa-47d3-b27b-11adec8e2aa7/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.570817 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aedc9a8-307e-4ea5-bc63-b6c661275773" path="/var/lib/kubelet/pods/0aedc9a8-307e-4ea5-bc63-b6c661275773/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.571364 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="126f2c5e-9f3f-444c-854c-b72d3c16c695" path="/var/lib/kubelet/pods/126f2c5e-9f3f-444c-854c-b72d3c16c695/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.576625 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea1ebb8-6827-4f0b-a055-3b77e18755ac" path="/var/lib/kubelet/pods/5ea1ebb8-6827-4f0b-a055-3b77e18755ac/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.577395 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602f4606-6ad9-4358-935e-b4dcc0282e50" path="/var/lib/kubelet/pods/602f4606-6ad9-4358-935e-b4dcc0282e50/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.578988 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7219c3e0-3c80-4c3a-b0c1-1918cb3980ac" path="/var/lib/kubelet/pods/7219c3e0-3c80-4c3a-b0c1-1918cb3980ac/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.579827 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74ffd03c-7228-474b-830e-01f0be8998d5" path="/var/lib/kubelet/pods/74ffd03c-7228-474b-830e-01f0be8998d5/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.581063 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f5c169-0392-4dbe-91a4-856e444ce6a9" path="/var/lib/kubelet/pods/75f5c169-0392-4dbe-91a4-856e444ce6a9/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.581512 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0e3b03-58b3-4ece-be71-303a24548901" path="/var/lib/kubelet/pods/9d0e3b03-58b3-4ece-be71-303a24548901/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.590776 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a74aad93-58f0-4023-95e3-3f0e92558f84" path="/var/lib/kubelet/pods/a74aad93-58f0-4023-95e3-3f0e92558f84/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.592994 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acbd9367-38fb-4a1d-b818-c0bd4893c0de" path="/var/lib/kubelet/pods/acbd9367-38fb-4a1d-b818-c0bd4893c0de/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.593665 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b438de63-b387-458f-95d3-16d70d981ba5" path="/var/lib/kubelet/pods/b438de63-b387-458f-95d3-16d70d981ba5/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.594905 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5bd7265-3f87-4e7b-9dc1-29e2e99c8771" path="/var/lib/kubelet/pods/c5bd7265-3f87-4e7b-9dc1-29e2e99c8771/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.595945 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f47300-df0d-451f-bc80-feec784391ec" path="/var/lib/kubelet/pods/d5f47300-df0d-451f-bc80-feec784391ec/volumes"
Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.596713 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7876d93-7bc5-407c-b554-da69dbfa93f0" path="/var/lib/kubelet/pods/e7876d93-7bc5-407c-b554-da69dbfa93f0/volumes"
path="/var/lib/kubelet/pods/f2b3c67e-9da6-4515-a39d-8b653cfb6b56/volumes" Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.599978 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdcdfe5a-4464-43fe-94de-09f58a3f7a46" path="/var/lib/kubelet/pods/fdcdfe5a-4464-43fe-94de-09f58a3f7a46/volumes" Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.600653 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 26 07:22:22 crc kubenswrapper[4909]: I1126 07:22:22.600695 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.948499 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497 is running failed: container process not found" containerID="4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.949135 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497 is running failed: container process not found" containerID="4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.949521 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497 is running failed: container process not found" containerID="4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 26 07:22:22 crc kubenswrapper[4909]: E1126 07:22:22.949566 4909 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="24fe368f-39d5-438d-baf0-4e66700131f4" containerName="galera" Nov 26 07:22:23 crc kubenswrapper[4909]: E1126 07:22:23.069611 4909 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 26 07:22:23 crc kubenswrapper[4909]: E1126 07:22:23.069967 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data podName:e827f391-2fcb-4758-ae5e-deef3c712e53 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:27.069949443 +0000 UTC m=+1319.216160609 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data") pod "rabbitmq-server-0" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53") : configmap "rabbitmq-config-data" not found Nov 26 07:22:23 crc kubenswrapper[4909]: E1126 07:22:23.186308 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="000a88ca7c5406a86dd0230b6355446065c541ff1b5dd6796e96d8e7b58a4adc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:23 crc kubenswrapper[4909]: E1126 07:22:23.204290 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="000a88ca7c5406a86dd0230b6355446065c541ff1b5dd6796e96d8e7b58a4adc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:23 crc kubenswrapper[4909]: E1126 07:22:23.209984 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="000a88ca7c5406a86dd0230b6355446065c541ff1b5dd6796e96d8e7b58a4adc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:23 crc kubenswrapper[4909]: E1126 07:22:23.210042 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="0d2c4878-7f21-469c-b19b-c76f335e9e75" containerName="nova-cell0-conductor-conductor" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.332667 4909 generic.go:334] "Generic (PLEG): container finished" podID="24fe368f-39d5-438d-baf0-4e66700131f4" containerID="4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497" exitCode=0 Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.332743 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24fe368f-39d5-438d-baf0-4e66700131f4","Type":"ContainerDied","Data":"4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497"} Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.332777 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24fe368f-39d5-438d-baf0-4e66700131f4","Type":"ContainerDied","Data":"74ac97601a0ecf73149113f3e5fb334717b79b4d933a2e6b3b45d2d115fe8e32"} Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.332795 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74ac97601a0ecf73149113f3e5fb334717b79b4d933a2e6b3b45d2d115fe8e32" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.340475 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zk568"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.347803 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zk568"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.349799 4909 generic.go:334] "Generic (PLEG): container finished" podID="793c680c-7448-478e-bbf4-bca888e7e4c9" containerID="58fff6394c3033d5aea46706541b169e8d37fa6d62b5f97b12a15d03cc97891e" exitCode=0 Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 
07:22:23.349877 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell050ac-account-delete-2nvln" event={"ID":"793c680c-7448-478e-bbf4-bca888e7e4c9","Type":"ContainerDied","Data":"58fff6394c3033d5aea46706541b169e8d37fa6d62b5f97b12a15d03cc97891e"} Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.360054 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e922-account-create-dz958"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.367094 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placemente922-account-delete-ccdmj"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.375950 4909 generic.go:334] "Generic (PLEG): container finished" podID="ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4" containerID="fc4e1d11210ea94d5e20c39c83a322bf0f7dc51504c8b4db99b77d2610531017" exitCode=0 Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.376121 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4","Type":"ContainerDied","Data":"fc4e1d11210ea94d5e20c39c83a322bf0f7dc51504c8b4db99b77d2610531017"} Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.376154 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4","Type":"ContainerDied","Data":"3e1fa8da909e5f2a3e31f2d41cf514d44119015cc6d1f43c8ce67e358c6b4419"} Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.376168 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e1fa8da909e5f2a3e31f2d41cf514d44119015cc6d1f43c8ce67e358c6b4419" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.390677 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e922-account-create-dz958"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.408320 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-v68wp"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.411863 4909 generic.go:334] "Generic (PLEG): container finished" podID="944eaf5b-6552-409a-a932-7fceaf182ff7" containerID="c85db5adf812e59df5035d902468336440f55176401465a21bde6db65d9ef632" exitCode=0 Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.411994 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutroncf0d-account-delete-8ssk2" event={"ID":"944eaf5b-6552-409a-a932-7fceaf182ff7","Type":"ContainerDied","Data":"c85db5adf812e59df5035d902468336440f55176401465a21bde6db65d9ef632"} Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.413766 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-v68wp"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.420265 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a478-account-create-75mwn"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.424804 4909 scope.go:117] "RemoveContainer" containerID="bbec5715c551f88ea231efe57c9124f91b9b77cfb5ebea4c9e465ffb097ed605" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.425167 4909 generic.go:334] "Generic (PLEG): container finished" podID="c468acce-9341-4eff-94c9-f38b74077fdf" containerID="04913ab9e915ad52de6004fca29426d07d456a81c47c21c9ae2f92f19e8bde70" exitCode=0 Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.425224 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi1964-account-delete-vmj4p" 
event={"ID":"c468acce-9341-4eff-94c9-f38b74077fdf","Type":"ContainerDied","Data":"04913ab9e915ad52de6004fca29426d07d456a81c47c21c9ae2f92f19e8bde70"} Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.436270 4909 generic.go:334] "Generic (PLEG): container finished" podID="3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" containerID="880e17b02d4c9dab1267c346c106365cd7c194623167e4792837a0d418d59a8f" exitCode=0 Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.436320 4909 generic.go:334] "Generic (PLEG): container finished" podID="3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" containerID="198a28e1346d287b3f7810156c663f6c5e14b09f746144f9f4aee54002dbd0a2" exitCode=0 Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.436340 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-567d49d699-wbzsj" event={"ID":"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32","Type":"ContainerDied","Data":"880e17b02d4c9dab1267c346c106365cd7c194623167e4792837a0d418d59a8f"} Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.436425 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-567d49d699-wbzsj" event={"ID":"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32","Type":"ContainerDied","Data":"198a28e1346d287b3f7810156c663f6c5e14b09f746144f9f4aee54002dbd0a2"} Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.440952 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a478-account-create-75mwn"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.442665 4909 generic.go:334] "Generic (PLEG): container finished" podID="ba28de5d-9cc0-475b-9eb5-6fce621ca4e2" containerID="5f08c808496e869b52d1277ac27949d8fe1c649888be137aecb9eaeeed0c8968" exitCode=0 Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.442717 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2354-account-delete-qnktx" event={"ID":"ba28de5d-9cc0-475b-9eb5-6fce621ca4e2","Type":"ContainerDied","Data":"5f08c808496e869b52d1277ac27949d8fe1c649888be137aecb9eaeeed0c8968"} Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.465460 4909 generic.go:334] "Generic (PLEG): container finished" podID="7382debb-3dc4-4849-9109-5d415c6a196f" containerID="448123d43785c504b22d8f7af78abefd489be91367d82b1fa2c04ab27f96653f" exitCode=0 Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.465526 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" event={"ID":"7382debb-3dc4-4849-9109-5d415c6a196f","Type":"ContainerDied","Data":"448123d43785c504b22d8f7af78abefd489be91367d82b1fa2c04ab27f96653f"} Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.466521 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.466521 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.473244 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbicana478-account-delete-9mdlb"]
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.473373 4909 generic.go:334] "Generic (PLEG): container finished" podID="89513daa-9a0c-4888-9a33-0ba9c007da26" containerID="ccfd3d7b7cf51112b6da0749f82a4e2c74a5a1cb50d253b689f9325bce61d9fd" exitCode=0
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.473407 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicana478-account-delete-9mdlb" event={"ID":"89513daa-9a0c-4888-9a33-0ba9c007da26","Type":"ContainerDied","Data":"ccfd3d7b7cf51112b6da0749f82a4e2c74a5a1cb50d253b689f9325bce61d9fd"}
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.474526 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.475922 4909 generic.go:334] "Generic (PLEG): container finished" podID="d79c0347-3494-4451-83b3-9919dd346f19" containerID="8d96446636d1c32bd33935c854eafae7f92ad00599e940eb16e0ee0ad1233ddc" exitCode=0
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.475947 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6489c4db99-sc69l" event={"ID":"d79c0347-3494-4451-83b3-9919dd346f19","Type":"ContainerDied","Data":"8d96446636d1c32bd33935c854eafae7f92ad00599e940eb16e0ee0ad1233ddc"}
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.483397 4909 scope.go:117] "RemoveContainer" containerID="6799943ef5fdd232348a851fd58224528db5e2724e6f622defc9357dcbfa014e"
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.499677 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q54r\" (UniqueName: \"kubernetes.io/projected/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-kube-api-access-5q54r\") pod \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.499774 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-vencrypt-tls-certs\") pod \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.499804 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-combined-ca-bundle\") pod \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.499860 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-config-data\") pod \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.499956 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-nova-novncproxy-tls-certs\") pod \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.589995 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-kube-api-access-5q54r" (OuterVolumeSpecName: "kube-api-access-5q54r") pod "ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4" (UID: "ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4"). InnerVolumeSpecName "kube-api-access-5q54r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.608210 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-operator-scripts\") pod \"24fe368f-39d5-438d-baf0-4e66700131f4\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.608364 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-generated\") pod \"24fe368f-39d5-438d-baf0-4e66700131f4\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.608429 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-galera-tls-certs\") pod \"24fe368f-39d5-438d-baf0-4e66700131f4\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.608483 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vfdd\" (UniqueName: \"kubernetes.io/projected/24fe368f-39d5-438d-baf0-4e66700131f4-kube-api-access-4vfdd\") pod \"24fe368f-39d5-438d-baf0-4e66700131f4\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.608512 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-default\") pod \"24fe368f-39d5-438d-baf0-4e66700131f4\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.608533 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-combined-ca-bundle\") pod \"24fe368f-39d5-438d-baf0-4e66700131f4\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.608570 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-secrets\") pod \"24fe368f-39d5-438d-baf0-4e66700131f4\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.608600 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"24fe368f-39d5-438d-baf0-4e66700131f4\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.608634 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-kolla-config\") pod \"24fe368f-39d5-438d-baf0-4e66700131f4\" (UID: \"24fe368f-39d5-438d-baf0-4e66700131f4\") "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.609071 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q54r\" (UniqueName: \"kubernetes.io/projected/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-kube-api-access-5q54r\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.610902 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "24fe368f-39d5-438d-baf0-4e66700131f4" (UID: "24fe368f-39d5-438d-baf0-4e66700131f4"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.613942 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "24fe368f-39d5-438d-baf0-4e66700131f4" (UID: "24fe368f-39d5-438d-baf0-4e66700131f4"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.614350 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24fe368f-39d5-438d-baf0-4e66700131f4" (UID: "24fe368f-39d5-438d-baf0-4e66700131f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.614571 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "24fe368f-39d5-438d-baf0-4e66700131f4" (UID: "24fe368f-39d5-438d-baf0-4e66700131f4"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.618311 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-secrets" (OuterVolumeSpecName: "secrets") pod "24fe368f-39d5-438d-baf0-4e66700131f4" (UID: "24fe368f-39d5-438d-baf0-4e66700131f4"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.620006 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24fe368f-39d5-438d-baf0-4e66700131f4-kube-api-access-4vfdd" (OuterVolumeSpecName: "kube-api-access-4vfdd") pod "24fe368f-39d5-438d-baf0-4e66700131f4" (UID: "24fe368f-39d5-438d-baf0-4e66700131f4"). InnerVolumeSpecName "kube-api-access-4vfdd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.649775 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "mysql-db") pod "24fe368f-39d5-438d-baf0-4e66700131f4" (UID: "24fe368f-39d5-438d-baf0-4e66700131f4"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.711791 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-generated\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.711823 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vfdd\" (UniqueName: \"kubernetes.io/projected/24fe368f-39d5-438d-baf0-4e66700131f4-kube-api-access-4vfdd\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.711832 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-config-data-default\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.711840 4909 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-secrets\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.711860 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.711869 4909 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-kolla-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.711879 4909 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24fe368f-39d5-438d-baf0-4e66700131f4-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.772512 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4" (UID: "ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.816791 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.821838 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-v2w7l"]
Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.826479 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-v2w7l"]
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.873333 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.884080 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4" (UID: "ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.888730 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1964-account-create-gb24n"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.891307 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "24fe368f-39d5-438d-baf0-4e66700131f4" (UID: "24fe368f-39d5-438d-baf0-4e66700131f4"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.898379 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi1964-account-delete-vmj4p"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.900432 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24fe368f-39d5-438d-baf0-4e66700131f4" (UID: "24fe368f-39d5-438d-baf0-4e66700131f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.912470 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1964-account-create-gb24n"] Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.920121 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4" (UID: "ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.920899 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-vencrypt-tls-certs\") pod \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\" (UID: \"ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4\") " Nov 26 07:22:23 crc kubenswrapper[4909]: W1126 07:22:23.921035 4909 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4/volumes/kubernetes.io~secret/vencrypt-tls-certs Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.921055 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4" (UID: "ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.921271 4909 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.921288 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.921297 4909 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.921307 4909 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.921316 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fe368f-39d5-438d-baf0-4e66700131f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:23 crc kubenswrapper[4909]: I1126 07:22:23.921324 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.097371 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.098044 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="ceilometer-central-agent" containerID="cri-o://2a498b57189a43f970e29d0e8040bbe8756423b8463ab0d366df0dad3c6b6fa0" gracePeriod=30 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.098428 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="proxy-httpd" containerID="cri-o://d849337821d8217076eb9a9d55645f97c144b965bd6ef5def3a986ec27b0c502" 
gracePeriod=30 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.098452 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="ceilometer-notification-agent" containerID="cri-o://51f26cf80d8da853fb9da8dc0fafd164d9ce6124c41fe4293c3704ab70a1c633" gracePeriod=30 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.098544 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="sg-core" containerID="cri-o://043f2d799dc68a009facf1b7538ebe9df2d186af807ec3bf0beb95026499894c" gracePeriod=30 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.142674 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.143025 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0b222993-a4da-4936-807a-9e99c637bc27" containerName="kube-state-metrics" containerID="cri-o://85ed4d9b155d719cd9be008a478f8c30dd565960a367abc54ca0b3cdd6d157e2" gracePeriod=30 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.206495 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:59270->10.217.0.204:8775: read: connection reset by peer" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.206899 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:59280->10.217.0.204:8775: read: connection reset by peer" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.433234 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.442563 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7" containerName="memcached" containerID="cri-o://c6f4d8f0f4e61dc21f3c450f4b1e6411451bb293632a6aacfce4bc571716303d" gracePeriod=30 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.556914 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.556914 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6489c4db99-sc69l"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.581243 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="031a6940-0a2c-4be2-9601-061ebeac0989" path="/var/lib/kubelet/pods/031a6940-0a2c-4be2-9601-061ebeac0989/volumes"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.592626 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18132a30-759b-445e-887c-84acbf813072" path="/var/lib/kubelet/pods/18132a30-759b-445e-887c-84acbf813072/volumes"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.593159 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c61b861-dea8-48b1-a0f3-aeec4d1cb973" path="/var/lib/kubelet/pods/2c61b861-dea8-48b1-a0f3-aeec4d1cb973/volumes"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.593668 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4688369a-740e-448c-b5ef-72243cc7597a" path="/var/lib/kubelet/pods/4688369a-740e-448c-b5ef-72243cc7597a/volumes"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.597542 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b71a783-4ce8-4d76-8023-65f4bc62bb61" path="/var/lib/kubelet/pods/8b71a783-4ce8-4d76-8023-65f4bc62bb61/volumes"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.598115 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e888ebe-9e2c-4747-8ecc-e03877820810" path="/var/lib/kubelet/pods/8e888ebe-9e2c-4747-8ecc-e03877820810/volumes"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.598820 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd763da-b7ea-4a61-846c-029eb54d9a08" path="/var/lib/kubelet/pods/edd763da-b7ea-4a61-846c-029eb54d9a08/volumes"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.599830 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-k9bk8"]
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.599924 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-k9bk8"]
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.601548 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-567d49d699-wbzsj"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.609206 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.617846 4909 generic.go:334] "Generic (PLEG): container finished" podID="0fdde234-058b-4e39-a647-b87669d3fda5" containerID="c6050544ee612b28a902864fcf0420d3aca003e37b8bf69449abc96dd1260ebc" exitCode=0
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.617910 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fdde234-058b-4e39-a647-b87669d3fda5","Type":"ContainerDied","Data":"c6050544ee612b28a902864fcf0420d3aca003e37b8bf69449abc96dd1260ebc"}
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.626584 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8646886cd-cj5pc"]
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.626864 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-8646886cd-cj5pc" podUID="b0ef7a35-86f9-4afc-9529-ff707ba448a9" containerName="keystone-api" containerID="cri-o://c7815d86ff25c599f9e26f760ca58bbfc89cea51769f7eddc87a7472792ccca9" gracePeriod=30
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.687955 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutroncf0d-account-delete-8ssk2"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.688320 4909 scope.go:117] "RemoveContainer" containerID="b3791de9e2272db503f056aa00b550a13134003b62ae8874f22a308095ddd4a6"
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.692357 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-internal-tls-certs\") pod \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") "
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.692414 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-combined-ca-bundle\") pod \"7382debb-3dc4-4849-9109-5d415c6a196f\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") "
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.692442 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-run-httpd\") pod \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") "
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.692467 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-combined-ca-bundle\") pod \"d79c0347-3494-4451-83b3-9919dd346f19\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") "
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.692551 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7382debb-3dc4-4849-9109-5d415c6a196f-logs\") pod \"7382debb-3dc4-4849-9109-5d415c6a196f\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") "
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.694200 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" (UID: "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.695782 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7382debb-3dc4-4849-9109-5d415c6a196f-logs" (OuterVolumeSpecName: "logs") pod "7382debb-3dc4-4849-9109-5d415c6a196f" (UID: "7382debb-3dc4-4849-9109-5d415c6a196f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.698120 4909 generic.go:334] "Generic (PLEG): container finished" podID="93d42f19-cfd6-4b06-aaf2-8febb4bd3945" containerID="c4a2a35459e0211d7d702615427fb3cab25f73244a809f8e2c702c0ed29c48f9" exitCode=1
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.698316 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell18b90-account-delete-n7278" event={"ID":"93d42f19-cfd6-4b06-aaf2-8febb4bd3945","Type":"ContainerDied","Data":"c4a2a35459e0211d7d702615427fb3cab25f73244a809f8e2c702c0ed29c48f9"}
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.700731 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-public-tls-certs\") pod \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") "
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.700783 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9qbb\" (UniqueName: \"kubernetes.io/projected/d79c0347-3494-4451-83b3-9919dd346f19-kube-api-access-l9qbb\") pod \"d79c0347-3494-4451-83b3-9919dd346f19\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") "
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.700808 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data\") pod \"d79c0347-3494-4451-83b3-9919dd346f19\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") "
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.700848 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-config-data\") pod \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") "
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.700879 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data-custom\") pod \"d79c0347-3494-4451-83b3-9919dd346f19\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") "
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.700915 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-combined-ca-bundle\") pod \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") "
Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.700998 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName:
\"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data-custom\") pod \"7382debb-3dc4-4849-9109-5d415c6a196f\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.701054 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-log-httpd\") pod \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.701088 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data\") pod \"7382debb-3dc4-4849-9109-5d415c6a196f\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.701118 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn456\" (UniqueName: \"kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-kube-api-access-kn456\") pod \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.701163 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d79c0347-3494-4451-83b3-9919dd346f19-logs\") pod \"d79c0347-3494-4451-83b3-9919dd346f19\" (UID: \"d79c0347-3494-4451-83b3-9919dd346f19\") " Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.701195 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv7ck\" (UniqueName: \"kubernetes.io/projected/7382debb-3dc4-4849-9109-5d415c6a196f-kube-api-access-wv7ck\") pod \"7382debb-3dc4-4849-9109-5d415c6a196f\" (UID: \"7382debb-3dc4-4849-9109-5d415c6a196f\") " Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.701229 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-etc-swift\") pod \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\" (UID: \"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32\") " Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.702112 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.702129 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7382debb-3dc4-4849-9109-5d415c6a196f-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.706321 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance2354-account-delete-qnktx" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.713654 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2354-account-delete-qnktx" event={"ID":"ba28de5d-9cc0-475b-9eb5-6fce621ca4e2","Type":"ContainerDied","Data":"ed38b9c53894c371b6cc4eccd3489b009fe1abcd1cd4e957d7e0ddbc69c2aa7b"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.719153 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" (UID: "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.719627 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" (UID: "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.724750 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-kube-api-access-kn456" (OuterVolumeSpecName: "kube-api-access-kn456") pod "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" (UID: "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32"). InnerVolumeSpecName "kube-api-access-kn456". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.702700 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d79c0347-3494-4451-83b3-9919dd346f19-logs" (OuterVolumeSpecName: "logs") pod "d79c0347-3494-4451-83b3-9919dd346f19" (UID: "d79c0347-3494-4451-83b3-9919dd346f19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.731335 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell050ac-account-delete-2nvln" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.733915 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7382debb-3dc4-4849-9109-5d415c6a196f" (UID: "7382debb-3dc4-4849-9109-5d415c6a196f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.735824 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutroncf0d-account-delete-8ssk2" event={"ID":"944eaf5b-6552-409a-a932-7fceaf182ff7","Type":"ContainerDied","Data":"bae3d62868c2415636e58387f377780fdc2ca6213b52bb43487badf27d8f523c"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.735851 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutroncf0d-account-delete-8ssk2" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.758574 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qg72z"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.760198 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d79c0347-3494-4451-83b3-9919dd346f19" (UID: "d79c0347-3494-4451-83b3-9919dd346f19"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.783642 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qg72z"] Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.788342 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c45a332a215d0d2112e3b34236e6f38a29d0cb0b840b7dcbc7cc8721193190e5" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.797323 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone2db1-account-delete-24jqj"] Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.798295 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acbd9367-38fb-4a1d-b818-c0bd4893c0de" containerName="init" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.798317 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="acbd9367-38fb-4a1d-b818-c0bd4893c0de" containerName="init" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.798336 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a74aad93-58f0-4023-95e3-3f0e92558f84" containerName="ovn-controller" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.798343 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a74aad93-58f0-4023-95e3-3f0e92558f84" containerName="ovn-controller" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.798350 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea1ebb8-6827-4f0b-a055-3b77e18755ac" containerName="ovsdbserver-sb" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.798356 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea1ebb8-6827-4f0b-a055-3b77e18755ac" containerName="ovsdbserver-sb" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.798371 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944eaf5b-6552-409a-a932-7fceaf182ff7" containerName="mariadb-account-delete" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.798377 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="944eaf5b-6552-409a-a932-7fceaf182ff7" containerName="mariadb-account-delete" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.798392 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acbd9367-38fb-4a1d-b818-c0bd4893c0de" containerName="dnsmasq-dns" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.798398 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="acbd9367-38fb-4a1d-b818-c0bd4893c0de" containerName="dnsmasq-dns" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.798424 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fe368f-39d5-438d-baf0-4e66700131f4" 
containerName="galera" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.798430 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fe368f-39d5-438d-baf0-4e66700131f4" containerName="galera" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.798445 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" containerName="proxy-server" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.799985 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d79c0347-3494-4451-83b3-9919dd346f19-kube-api-access-l9qbb" (OuterVolumeSpecName: "kube-api-access-l9qbb") pod "d79c0347-3494-4451-83b3-9919dd346f19" (UID: "d79c0347-3494-4451-83b3-9919dd346f19"). InnerVolumeSpecName "kube-api-access-l9qbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.806095 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" containerName="proxy-server" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.806226 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd763da-b7ea-4a61-846c-029eb54d9a08" containerName="openstack-network-exporter" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.806279 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd763da-b7ea-4a61-846c-029eb54d9a08" containerName="openstack-network-exporter" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.806330 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ffd03c-7228-474b-830e-01f0be8998d5" containerName="openstack-network-exporter" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.806678 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ffd03c-7228-474b-830e-01f0be8998d5" containerName="openstack-network-exporter" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.806775 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" containerName="proxy-httpd" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.806830 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" containerName="proxy-httpd" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.806942 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba28de5d-9cc0-475b-9eb5-6fce621ca4e2" containerName="mariadb-account-delete" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.806995 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba28de5d-9cc0-475b-9eb5-6fce621ca4e2" containerName="mariadb-account-delete" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.809301 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7382debb-3dc4-4849-9109-5d415c6a196f" containerName="barbican-keystone-listener-log" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.810220 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7382debb-3dc4-4849-9109-5d415c6a196f" containerName="barbican-keystone-listener-log" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.810285 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793c680c-7448-478e-bbf4-bca888e7e4c9" containerName="mariadb-account-delete" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.810341 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="793c680c-7448-478e-bbf4-bca888e7e4c9" containerName="mariadb-account-delete" Nov 26 07:22:24 
crc kubenswrapper[4909]: E1126 07:22:24.810403 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4" containerName="nova-cell1-novncproxy-novncproxy" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.810450 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4" containerName="nova-cell1-novncproxy-novncproxy" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.810509 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7382debb-3dc4-4849-9109-5d415c6a196f" containerName="barbican-keystone-listener" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.810583 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7382debb-3dc4-4849-9109-5d415c6a196f" containerName="barbican-keystone-listener" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.809578 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.807850 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p98cp\" (UniqueName: \"kubernetes.io/projected/ba28de5d-9cc0-475b-9eb5-6fce621ca4e2-kube-api-access-p98cp\") pod \"ba28de5d-9cc0-475b-9eb5-6fce621ca4e2\" (UID: \"ba28de5d-9cc0-475b-9eb5-6fce621ca4e2\") " Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.820077 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w98zv\" (UniqueName: \"kubernetes.io/projected/793c680c-7448-478e-bbf4-bca888e7e4c9-kube-api-access-w98zv\") pod \"793c680c-7448-478e-bbf4-bca888e7e4c9\" (UID: \"793c680c-7448-478e-bbf4-bca888e7e4c9\") " Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.820324 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkhqn\" (UniqueName: \"kubernetes.io/projected/944eaf5b-6552-409a-a932-7fceaf182ff7-kube-api-access-dkhqn\") pod \"944eaf5b-6552-409a-a932-7fceaf182ff7\" (UID: \"944eaf5b-6552-409a-a932-7fceaf182ff7\") " Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.821048 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.810067 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c45a332a215d0d2112e3b34236e6f38a29d0cb0b840b7dcbc7cc8721193190e5" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.822717 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.822928 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn456\" (UniqueName: 
\"kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-kube-api-access-kn456\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.822950 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d79c0347-3494-4451-83b3-9919dd346f19-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.822962 4909 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.822974 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9qbb\" (UniqueName: \"kubernetes.io/projected/d79c0347-3494-4451-83b3-9919dd346f19-kube-api-access-l9qbb\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.822983 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.815994 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7382debb-3dc4-4849-9109-5d415c6a196f-kube-api-access-wv7ck" (OuterVolumeSpecName: "kube-api-access-wv7ck") pod "7382debb-3dc4-4849-9109-5d415c6a196f" (UID: "7382debb-3dc4-4849-9109-5d415c6a196f"). InnerVolumeSpecName "kube-api-access-wv7ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.809808 4909 generic.go:334] "Generic (PLEG): container finished" podID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerID="930ad0d120558b695b7edf3c8655a0626d86ed2453cbe13f094dc27931f585b5" exitCode=0 Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.826423 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.826617 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944eaf5b-6552-409a-a932-7fceaf182ff7-kube-api-access-dkhqn" (OuterVolumeSpecName: "kube-api-access-dkhqn") pod "944eaf5b-6552-409a-a932-7fceaf182ff7" (UID: "944eaf5b-6552-409a-a932-7fceaf182ff7"). InnerVolumeSpecName "kube-api-access-dkhqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.837244 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.837479 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.840231 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793c680c-7448-478e-bbf4-bca888e7e4c9-kube-api-access-w98zv" (OuterVolumeSpecName: "kube-api-access-w98zv") pod "793c680c-7448-478e-bbf4-bca888e7e4c9" (UID: "793c680c-7448-478e-bbf4-bca888e7e4c9"). InnerVolumeSpecName "kube-api-access-w98zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.844519 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd763da-b7ea-4a61-846c-029eb54d9a08" containerName="ovsdbserver-nb" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.844555 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd763da-b7ea-4a61-846c-029eb54d9a08" containerName="ovsdbserver-nb" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.844571 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79c0347-3494-4451-83b3-9919dd346f19" containerName="barbican-worker" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.844581 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79c0347-3494-4451-83b3-9919dd346f19" containerName="barbican-worker" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.844614 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fe368f-39d5-438d-baf0-4e66700131f4" containerName="mysql-bootstrap" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.844620 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fe368f-39d5-438d-baf0-4e66700131f4" containerName="mysql-bootstrap" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.844641 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79c0347-3494-4451-83b3-9919dd346f19" containerName="barbican-worker-log" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.844647 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79c0347-3494-4451-83b3-9919dd346f19" containerName="barbican-worker-log" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.844668 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea1ebb8-6827-4f0b-a055-3b77e18755ac" containerName="openstack-network-exporter" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.844674 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea1ebb8-6827-4f0b-a055-3b77e18755ac" containerName="openstack-network-exporter" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846019 4909 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7382debb-3dc4-4849-9109-5d415c6a196f" containerName="barbican-keystone-listener-log" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846042 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea1ebb8-6827-4f0b-a055-3b77e18755ac" containerName="ovsdbserver-sb" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846055 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4" containerName="nova-cell1-novncproxy-novncproxy" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846067 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd763da-b7ea-4a61-846c-029eb54d9a08" containerName="ovsdbserver-nb" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846076 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="24fe368f-39d5-438d-baf0-4e66700131f4" containerName="galera" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846088 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a74aad93-58f0-4023-95e3-3f0e92558f84" containerName="ovn-controller" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846094 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="793c680c-7448-478e-bbf4-bca888e7e4c9" containerName="mariadb-account-delete" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846101 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7382debb-3dc4-4849-9109-5d415c6a196f" containerName="barbican-keystone-listener" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846110 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" containerName="proxy-server" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846122 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea1ebb8-6827-4f0b-a055-3b77e18755ac" containerName="openstack-network-exporter" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846129 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" containerName="proxy-httpd" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846137 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="acbd9367-38fb-4a1d-b818-c0bd4893c0de" containerName="dnsmasq-dns" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846150 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd763da-b7ea-4a61-846c-029eb54d9a08" containerName="openstack-network-exporter" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846159 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="944eaf5b-6552-409a-a932-7fceaf182ff7" containerName="mariadb-account-delete" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846169 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79c0347-3494-4451-83b3-9919dd346f19" containerName="barbican-worker" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846187 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79c0347-3494-4451-83b3-9919dd346f19" containerName="barbican-worker-log" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846195 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="74ffd03c-7228-474b-830e-01f0be8998d5" containerName="openstack-network-exporter" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.846207 4909 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ba28de5d-9cc0-475b-9eb5-6fce621ca4e2" containerName="mariadb-account-delete" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.846869 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c45a332a215d0d2112e3b34236e6f38a29d0cb0b840b7dcbc7cc8721193190e5" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.846946 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerName="ovn-northd" Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.847355 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.847375 4909 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.848173 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone2db1-account-delete-24jqj"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.848201 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62","Type":"ContainerDied","Data":"930ad0d120558b695b7edf3c8655a0626d86ed2453cbe13f094dc27931f585b5"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.848295 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone2db1-account-delete-24jqj" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.851720 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.852965 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 26 07:22:24 crc kubenswrapper[4909]: E1126 07:22:24.853034 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovs-vswitchd" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.865923 4909 generic.go:334] "Generic (PLEG): container finished" podID="07095ffe-adde-4857-93db-5a02f0adf9e6" containerID="3e62a202acc19dddd034b5dca03867a48ca15be9ff76077f42a4246722cebcdf" exitCode=0 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.866031 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"07095ffe-adde-4857-93db-5a02f0adf9e6","Type":"ContainerDied","Data":"3e62a202acc19dddd034b5dca03867a48ca15be9ff76077f42a4246722cebcdf"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.870798 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba28de5d-9cc0-475b-9eb5-6fce621ca4e2-kube-api-access-p98cp" (OuterVolumeSpecName: "kube-api-access-p98cp") pod "ba28de5d-9cc0-475b-9eb5-6fce621ca4e2" (UID: "ba28de5d-9cc0-475b-9eb5-6fce621ca4e2"). InnerVolumeSpecName "kube-api-access-p98cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.891995 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2db1-account-create-2rztp"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.897748 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone2db1-account-delete-24jqj"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.898513 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" event={"ID":"7382debb-3dc4-4849-9109-5d415c6a196f","Type":"ContainerDied","Data":"ad06187f176d483a88b61c58bef46d417ba51abeafb0a81f785e58cf8a7f3e9a"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.898625 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-85d774bbbb-slpbz" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.901221 4909 generic.go:334] "Generic (PLEG): container finished" podID="0b222993-a4da-4936-807a-9e99c637bc27" containerID="85ed4d9b155d719cd9be008a478f8c30dd565960a367abc54ca0b3cdd6d157e2" exitCode=2 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.901268 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b222993-a4da-4936-807a-9e99c637bc27","Type":"ContainerDied","Data":"85ed4d9b155d719cd9be008a478f8c30dd565960a367abc54ca0b3cdd6d157e2"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.903722 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2db1-account-create-2rztp"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.904626 4909 generic.go:334] "Generic (PLEG): container finished" podID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerID="bd1ad1420e63a2b12ad289deaed447ee9e8f36e1917943a95d281b6236ae4f9e" exitCode=0 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.904698 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88","Type":"ContainerDied","Data":"bd1ad1420e63a2b12ad289deaed447ee9e8f36e1917943a95d281b6236ae4f9e"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.915947 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-82vbv"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.923756 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-82vbv"] Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.925018 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv7ck\" (UniqueName: \"kubernetes.io/projected/7382debb-3dc4-4849-9109-5d415c6a196f-kube-api-access-wv7ck\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.925032 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p98cp\" (UniqueName: \"kubernetes.io/projected/ba28de5d-9cc0-475b-9eb5-6fce621ca4e2-kube-api-access-p98cp\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.925041 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w98zv\" (UniqueName: \"kubernetes.io/projected/793c680c-7448-478e-bbf4-bca888e7e4c9-kube-api-access-w98zv\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.925051 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkhqn\" (UniqueName: \"kubernetes.io/projected/944eaf5b-6552-409a-a932-7fceaf182ff7-kube-api-access-dkhqn\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.926247 4909 generic.go:334] "Generic (PLEG): container finished" podID="bc036cf2-920c-4497-bec8-cbf0d293c33a" containerID="ef7a513521775234881fd5ee1e7482c3b02487c963f7979e7ddc36cab4590a3e" exitCode=0 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.926338 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placemente922-account-delete-ccdmj" event={"ID":"bc036cf2-920c-4497-bec8-cbf0d293c33a","Type":"ContainerDied","Data":"ef7a513521775234881fd5ee1e7482c3b02487c963f7979e7ddc36cab4590a3e"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.933342 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" (UID: "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.934424 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7382debb-3dc4-4849-9109-5d415c6a196f" (UID: "7382debb-3dc4-4849-9109-5d415c6a196f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.949929 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-567d49d699-wbzsj" event={"ID":"3c7b32b7-50e7-48b0-8027-a6d0c85d6d32","Type":"ContainerDied","Data":"4fd891ac62fd450211d95085d96ee6dfb80d84d55ac3f44183d0b65a8979e7df"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.951230 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-567d49d699-wbzsj" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.960708 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d79c0347-3494-4451-83b3-9919dd346f19" (UID: "d79c0347-3494-4451-83b3-9919dd346f19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.961795 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell050ac-account-delete-2nvln" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.961839 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell050ac-account-delete-2nvln" event={"ID":"793c680c-7448-478e-bbf4-bca888e7e4c9","Type":"ContainerDied","Data":"afa67d7abffb104a9a0a0133de943389f52ee43d62d6e143ae42e5fe8b547f15"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.978717 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6489c4db99-sc69l" event={"ID":"d79c0347-3494-4451-83b3-9919dd346f19","Type":"ContainerDied","Data":"bb68cdcebf6c3e373dd1c0efde12a52465d23642dcdbe936042b900c1232fa2c"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.978849 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6489c4db99-sc69l" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.984328 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data" (OuterVolumeSpecName: "config-data") pod "d79c0347-3494-4451-83b3-9919dd346f19" (UID: "d79c0347-3494-4451-83b3-9919dd346f19"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.986561 4909 generic.go:334] "Generic (PLEG): container finished" podID="6791905e-4b74-417e-bc1b-0747eac5878e" containerID="9af326558746ae1f5b6fd43ef25bfc03798d1031c68245c1a4e7bd66e604b033" exitCode=0 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.986627 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6791905e-4b74-417e-bc1b-0747eac5878e","Type":"ContainerDied","Data":"9af326558746ae1f5b6fd43ef25bfc03798d1031c68245c1a4e7bd66e604b033"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.989610 4909 generic.go:334] "Generic (PLEG): container finished" podID="9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" containerID="a05a2e8d981ebb4cc5877598dba394fd26e24d76c6a72edee8536bc2f0214b86" exitCode=0 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.989652 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5fc4c8f8d8-g2ccp" event={"ID":"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd","Type":"ContainerDied","Data":"a05a2e8d981ebb4cc5877598dba394fd26e24d76c6a72edee8536bc2f0214b86"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.989669 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5fc4c8f8d8-g2ccp" event={"ID":"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd","Type":"ContainerDied","Data":"d1360c0fbd5ca81e6923bb6f1040c56e102db54539a428875d4a98ea9d046a6d"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.989681 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1360c0fbd5ca81e6923bb6f1040c56e102db54539a428875d4a98ea9d046a6d" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.992431 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data" (OuterVolumeSpecName: "config-data") pod "7382debb-3dc4-4849-9109-5d415c6a196f" (UID: "7382debb-3dc4-4849-9109-5d415c6a196f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.992558 4909 generic.go:334] "Generic (PLEG): container finished" podID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerID="d849337821d8217076eb9a9d55645f97c144b965bd6ef5def3a986ec27b0c502" exitCode=0 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.992581 4909 generic.go:334] "Generic (PLEG): container finished" podID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerID="043f2d799dc68a009facf1b7538ebe9df2d186af807ec3bf0beb95026499894c" exitCode=2 Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.992678 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.993438 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.995053 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bdc8ae5-e147-48a9-91d5-1f2425e2b379","Type":"ContainerDied","Data":"d849337821d8217076eb9a9d55645f97c144b965bd6ef5def3a986ec27b0c502"} Nov 26 07:22:24 crc kubenswrapper[4909]: I1126 07:22:24.995083 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bdc8ae5-e147-48a9-91d5-1f2425e2b379","Type":"ContainerDied","Data":"043f2d799dc68a009facf1b7538ebe9df2d186af807ec3bf0beb95026499894c"} Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.003756 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" (UID: "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.016732 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-config-data" (OuterVolumeSpecName: "config-data") pod "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" (UID: "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.017616 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" (UID: "3c7b32b7-50e7-48b0-8027-a6d0c85d6d32"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.029252 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9d9j\" (UniqueName: \"kubernetes.io/projected/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff-kube-api-access-g9d9j\") pod \"keystone2db1-account-delete-24jqj\" (UID: \"6e5dbbfc-d1fe-4335-b9fd-653192dd45ff\") " pod="openstack/keystone2db1-account-delete-24jqj" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.029373 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.029387 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.029395 4909 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.029405 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7382debb-3dc4-4849-9109-5d415c6a196f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.029417 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.029426 4909 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.029435 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79c0347-3494-4451-83b3-9919dd346f19-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.029443 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.112764 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell050ac-account-delete-2nvln"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.120138 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell050ac-account-delete-2nvln"] Nov 26 07:22:25 crc kubenswrapper[4909]: E1126 07:22:25.121505 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-g9d9j], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone2db1-account-delete-24jqj" podUID="6e5dbbfc-d1fe-4335-b9fd-653192dd45ff" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.130246 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.131438 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g9d9j\" (UniqueName: \"kubernetes.io/projected/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff-kube-api-access-g9d9j\") pod \"keystone2db1-account-delete-24jqj\" (UID: \"6e5dbbfc-d1fe-4335-b9fd-653192dd45ff\") " pod="openstack/keystone2db1-account-delete-24jqj" Nov 26 07:22:25 crc kubenswrapper[4909]: E1126 07:22:25.148632 4909 projected.go:194] Error preparing data for projected volume kube-api-access-g9d9j for pod openstack/keystone2db1-account-delete-24jqj: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 26 07:22:25 crc kubenswrapper[4909]: E1126 07:22:25.148722 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff-kube-api-access-g9d9j podName:6e5dbbfc-d1fe-4335-b9fd-653192dd45ff nodeName:}" failed. No retries permitted until 2025-11-26 07:22:25.648696311 +0000 UTC m=+1317.794907477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g9d9j" (UniqueName: "kubernetes.io/projected/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff-kube-api-access-g9d9j") pod "keystone2db1-account-delete-24jqj" (UID: "6e5dbbfc-d1fe-4335-b9fd-653192dd45ff") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.158314 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.158394 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.166146 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.168180 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.174172 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.188343 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutroncf0d-account-delete-8ssk2"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.203663 4909 scope.go:117] "RemoveContainer" containerID="f378d7762407801075bfcb45c33cc2c4c74d6b521a7a2e5f2082de9945e6ffe6" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.214843 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutroncf0d-account-delete-8ssk2"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.246192 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="10d0826f-4316-4c9a-bb8d-542fccd12a08" containerName="galera" containerID="cri-o://e851027dc323ea0e4c8353f7e34bc561fcaf6af5e2c46334b9918eabe5ff4a83" gracePeriod=30 Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.275616 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.280212 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-85d774bbbb-slpbz"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.291203 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.317418 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.330724 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-85d774bbbb-slpbz"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.332044 4909 scope.go:117] "RemoveContainer" containerID="5f08c808496e869b52d1277ac27949d8fe1c649888be137aecb9eaeeed0c8968" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.332617 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.334008 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s5pl\" (UniqueName: \"kubernetes.io/projected/0fdde234-058b-4e39-a647-b87669d3fda5-kube-api-access-6s5pl\") pod \"0fdde234-058b-4e39-a647-b87669d3fda5\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.334161 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-public-tls-certs\") pod \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.334312 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"0fdde234-058b-4e39-a647-b87669d3fda5\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.334448 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-internal-tls-certs\") pod \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.334548 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-config-data\") pod \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.335025 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-scripts\") pod \"0fdde234-058b-4e39-a647-b87669d3fda5\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.335134 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-combined-ca-bundle\") pod \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.335203 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-internal-tls-certs\") pod \"0fdde234-058b-4e39-a647-b87669d3fda5\" (UID: 
\"0fdde234-058b-4e39-a647-b87669d3fda5\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.335264 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz4vv\" (UniqueName: \"kubernetes.io/projected/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-kube-api-access-zz4vv\") pod \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.335342 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-scripts\") pod \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.335412 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-logs\") pod \"0fdde234-058b-4e39-a647-b87669d3fda5\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.335717 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-config-data\") pod \"0fdde234-058b-4e39-a647-b87669d3fda5\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.335880 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-combined-ca-bundle\") pod \"0fdde234-058b-4e39-a647-b87669d3fda5\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.336004 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-logs\") pod \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\" (UID: \"9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.336296 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-httpd-run\") pod \"0fdde234-058b-4e39-a647-b87669d3fda5\" (UID: \"0fdde234-058b-4e39-a647-b87669d3fda5\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.344307 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "0fdde234-058b-4e39-a647-b87669d3fda5" (UID: "0fdde234-058b-4e39-a647-b87669d3fda5"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.345260 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-logs" (OuterVolumeSpecName: "logs") pod "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" (UID: "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.346360 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdde234-058b-4e39-a647-b87669d3fda5-kube-api-access-6s5pl" (OuterVolumeSpecName: "kube-api-access-6s5pl") pod "0fdde234-058b-4e39-a647-b87669d3fda5" (UID: "0fdde234-058b-4e39-a647-b87669d3fda5"). InnerVolumeSpecName "kube-api-access-6s5pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.348291 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-logs" (OuterVolumeSpecName: "logs") pod "0fdde234-058b-4e39-a647-b87669d3fda5" (UID: "0fdde234-058b-4e39-a647-b87669d3fda5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.348884 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.347412 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0fdde234-058b-4e39-a647-b87669d3fda5" (UID: "0fdde234-058b-4e39-a647-b87669d3fda5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.353246 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-567d49d699-wbzsj"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.355938 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-scripts" (OuterVolumeSpecName: "scripts") pod "0fdde234-058b-4e39-a647-b87669d3fda5" (UID: "0fdde234-058b-4e39-a647-b87669d3fda5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.367689 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell18b90-account-delete-n7278" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.372212 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-567d49d699-wbzsj"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.372913 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-kube-api-access-zz4vv" (OuterVolumeSpecName: "kube-api-access-zz4vv") pod "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" (UID: "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd"). InnerVolumeSpecName "kube-api-access-zz4vv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.380970 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6489c4db99-sc69l"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.382918 4909 scope.go:117] "RemoveContainer" containerID="c85db5adf812e59df5035d902468336440f55176401465a21bde6db65d9ef632" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.383460 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-scripts" (OuterVolumeSpecName: "scripts") pod "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" (UID: "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.387442 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fdde234-058b-4e39-a647-b87669d3fda5" (UID: "0fdde234-058b-4e39-a647-b87669d3fda5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.396676 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6489c4db99-sc69l"] Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.428182 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0fdde234-058b-4e39-a647-b87669d3fda5" (UID: "0fdde234-058b-4e39-a647-b87669d3fda5"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.438413 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data-custom\") pod \"07095ffe-adde-4857-93db-5a02f0adf9e6\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.438699 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-config\") pod \"0b222993-a4da-4936-807a-9e99c637bc27\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.438722 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-logs\") pod \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.438746 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-internal-tls-certs\") pod \"07095ffe-adde-4857-93db-5a02f0adf9e6\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.438761 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-config-data\") pod \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.438791 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts\") pod \"07095ffe-adde-4857-93db-5a02f0adf9e6\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.438812 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-combined-ca-bundle\") pod \"07095ffe-adde-4857-93db-5a02f0adf9e6\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.438833 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-combined-ca-bundle\") pod \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.438866 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-internal-tls-certs\") pod \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.438954 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h25cn\" (UniqueName: \"kubernetes.io/projected/0b222993-a4da-4936-807a-9e99c637bc27-kube-api-access-h25cn\") pod 
\"0b222993-a4da-4936-807a-9e99c637bc27\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.438974 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-public-tls-certs\") pod \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439002 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-config-data\") pod \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439018 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msjw9\" (UniqueName: \"kubernetes.io/projected/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-kube-api-access-msjw9\") pod \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439049 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-httpd-run\") pod \"6791905e-4b74-417e-bc1b-0747eac5878e\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439091 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07095ffe-adde-4857-93db-5a02f0adf9e6-etc-machine-id\") pod \"07095ffe-adde-4857-93db-5a02f0adf9e6\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439135 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-combined-ca-bundle\") pod \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439179 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-logs\") pod \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439197 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data\") pod \"07095ffe-adde-4857-93db-5a02f0adf9e6\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439217 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2d6c\" (UniqueName: \"kubernetes.io/projected/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-kube-api-access-q2d6c\") pod \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\" (UID: \"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439247 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-public-tls-certs\") pod 
\"07095ffe-adde-4857-93db-5a02f0adf9e6\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439266 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-certs\") pod \"0b222993-a4da-4936-807a-9e99c637bc27\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439290 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07095ffe-adde-4857-93db-5a02f0adf9e6-logs\") pod \"07095ffe-adde-4857-93db-5a02f0adf9e6\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439563 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07095ffe-adde-4857-93db-5a02f0adf9e6-logs" (OuterVolumeSpecName: "logs") pod "07095ffe-adde-4857-93db-5a02f0adf9e6" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439633 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-combined-ca-bundle\") pod \"0b222993-a4da-4936-807a-9e99c637bc27\" (UID: \"0b222993-a4da-4936-807a-9e99c637bc27\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439652 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-scripts\") pod \"6791905e-4b74-417e-bc1b-0747eac5878e\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439680 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8czxx\" (UniqueName: \"kubernetes.io/projected/07095ffe-adde-4857-93db-5a02f0adf9e6-kube-api-access-8czxx\") pod \"07095ffe-adde-4857-93db-5a02f0adf9e6\" (UID: \"07095ffe-adde-4857-93db-5a02f0adf9e6\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.439694 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-nova-metadata-tls-certs\") pod \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\" (UID: \"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.440102 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07095ffe-adde-4857-93db-5a02f0adf9e6-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.440122 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.440131 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.440141 4909 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.440150 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz4vv\" (UniqueName: \"kubernetes.io/projected/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-kube-api-access-zz4vv\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.440158 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.440166 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.440174 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.440183 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.440191 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fdde234-058b-4e39-a647-b87669d3fda5-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.440199 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s5pl\" (UniqueName: \"kubernetes.io/projected/0fdde234-058b-4e39-a647-b87669d3fda5-kube-api-access-6s5pl\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.442578 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-logs" (OuterVolumeSpecName: "logs") pod "c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" (UID: "c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.449632 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07095ffe-adde-4857-93db-5a02f0adf9e6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "07095ffe-adde-4857-93db-5a02f0adf9e6" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.450579 4909 scope.go:117] "RemoveContainer" containerID="448123d43785c504b22d8f7af78abefd489be91367d82b1fa2c04ab27f96653f" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.452604 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "07095ffe-adde-4857-93db-5a02f0adf9e6" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.452891 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-logs" (OuterVolumeSpecName: "logs") pod "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" (UID: "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.453002 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts" (OuterVolumeSpecName: "scripts") pod "07095ffe-adde-4857-93db-5a02f0adf9e6" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.455378 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6791905e-4b74-417e-bc1b-0747eac5878e" (UID: "6791905e-4b74-417e-bc1b-0747eac5878e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.459615 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-scripts" (OuterVolumeSpecName: "scripts") pod "6791905e-4b74-417e-bc1b-0747eac5878e" (UID: "6791905e-4b74-417e-bc1b-0747eac5878e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.459781 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" (UID: "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.463810 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b222993-a4da-4936-807a-9e99c637bc27-kube-api-access-h25cn" (OuterVolumeSpecName: "kube-api-access-h25cn") pod "0b222993-a4da-4936-807a-9e99c637bc27" (UID: "0b222993-a4da-4936-807a-9e99c637bc27"). InnerVolumeSpecName "kube-api-access-h25cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.488283 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07095ffe-adde-4857-93db-5a02f0adf9e6-kube-api-access-8czxx" (OuterVolumeSpecName: "kube-api-access-8czxx") pod "07095ffe-adde-4857-93db-5a02f0adf9e6" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6"). InnerVolumeSpecName "kube-api-access-8czxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.488400 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-kube-api-access-msjw9" (OuterVolumeSpecName: "kube-api-access-msjw9") pod "c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" (UID: "c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62"). InnerVolumeSpecName "kube-api-access-msjw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.517496 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.522808 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-kube-api-access-q2d6c" (OuterVolumeSpecName: "kube-api-access-q2d6c") pod "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" (UID: "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88"). InnerVolumeSpecName "kube-api-access-q2d6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.522992 4909 scope.go:117] "RemoveContainer" containerID="06b62a49cf46e07b4f7ce61be83af8cadb8ad382322ec95765c2a554c930637b" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.541488 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-logs\") pod \"6791905e-4b74-417e-bc1b-0747eac5878e\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.541566 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swhz7\" (UniqueName: \"kubernetes.io/projected/93d42f19-cfd6-4b06-aaf2-8febb4bd3945-kube-api-access-swhz7\") pod \"93d42f19-cfd6-4b06-aaf2-8febb4bd3945\" (UID: \"93d42f19-cfd6-4b06-aaf2-8febb4bd3945\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.541719 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-public-tls-certs\") pod \"6791905e-4b74-417e-bc1b-0747eac5878e\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.541850 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f77tf\" (UniqueName: \"kubernetes.io/projected/6791905e-4b74-417e-bc1b-0747eac5878e-kube-api-access-f77tf\") pod \"6791905e-4b74-417e-bc1b-0747eac5878e\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.541928 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6791905e-4b74-417e-bc1b-0747eac5878e\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.541959 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-config-data\") pod \"6791905e-4b74-417e-bc1b-0747eac5878e\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.541973 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-combined-ca-bundle\") pod \"6791905e-4b74-417e-bc1b-0747eac5878e\" (UID: \"6791905e-4b74-417e-bc1b-0747eac5878e\") " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543108 4909 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/07095ffe-adde-4857-93db-5a02f0adf9e6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543122 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543132 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2d6c\" (UniqueName: \"kubernetes.io/projected/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-kube-api-access-q2d6c\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543143 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543152 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543161 4909 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543170 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8czxx\" (UniqueName: \"kubernetes.io/projected/07095ffe-adde-4857-93db-5a02f0adf9e6-kube-api-access-8czxx\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543178 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543186 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543195 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543204 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h25cn\" (UniqueName: \"kubernetes.io/projected/0b222993-a4da-4936-807a-9e99c637bc27-kube-api-access-h25cn\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543213 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msjw9\" (UniqueName: \"kubernetes.io/projected/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-kube-api-access-msjw9\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543221 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.543173 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-logs" (OuterVolumeSpecName: "logs") pod 
"6791905e-4b74-417e-bc1b-0747eac5878e" (UID: "6791905e-4b74-417e-bc1b-0747eac5878e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.570018 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d42f19-cfd6-4b06-aaf2-8febb4bd3945-kube-api-access-swhz7" (OuterVolumeSpecName: "kube-api-access-swhz7") pod "93d42f19-cfd6-4b06-aaf2-8febb4bd3945" (UID: "93d42f19-cfd6-4b06-aaf2-8febb4bd3945"). InnerVolumeSpecName "kube-api-access-swhz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.572514 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6791905e-4b74-417e-bc1b-0747eac5878e-kube-api-access-f77tf" (OuterVolumeSpecName: "kube-api-access-f77tf") pod "6791905e-4b74-417e-bc1b-0747eac5878e" (UID: "6791905e-4b74-417e-bc1b-0747eac5878e"). InnerVolumeSpecName "kube-api-access-f77tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.573013 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "6791905e-4b74-417e-bc1b-0747eac5878e" (UID: "6791905e-4b74-417e-bc1b-0747eac5878e"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.600337 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-config-data" (OuterVolumeSpecName: "config-data") pod "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" (UID: "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.606019 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "0b222993-a4da-4936-807a-9e99c637bc27" (UID: "0b222993-a4da-4936-807a-9e99c637bc27"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.647243 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swhz7\" (UniqueName: \"kubernetes.io/projected/93d42f19-cfd6-4b06-aaf2-8febb4bd3945-kube-api-access-swhz7\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.647291 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f77tf\" (UniqueName: \"kubernetes.io/projected/6791905e-4b74-417e-bc1b-0747eac5878e-kube-api-access-f77tf\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.647304 4909 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.647318 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.647351 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.647364 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6791905e-4b74-417e-bc1b-0747eac5878e-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.752965 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9d9j\" (UniqueName: \"kubernetes.io/projected/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff-kube-api-access-g9d9j\") pod \"keystone2db1-account-delete-24jqj\" (UID: \"6e5dbbfc-d1fe-4335-b9fd-653192dd45ff\") " pod="openstack/keystone2db1-account-delete-24jqj" Nov 26 07:22:25 crc kubenswrapper[4909]: E1126 07:22:25.776047 4909 projected.go:194] Error preparing data for projected volume kube-api-access-g9d9j for pod openstack/keystone2db1-account-delete-24jqj: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 26 07:22:25 crc kubenswrapper[4909]: E1126 07:22:25.776158 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff-kube-api-access-g9d9j podName:6e5dbbfc-d1fe-4335-b9fd-653192dd45ff nodeName:}" failed. No retries permitted until 2025-11-26 07:22:26.776129148 +0000 UTC m=+1318.922340314 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-g9d9j" (UniqueName: "kubernetes.io/projected/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff-kube-api-access-g9d9j") pod "keystone2db1-account-delete-24jqj" (UID: "6e5dbbfc-d1fe-4335-b9fd-653192dd45ff") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.804979 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" (UID: "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.808155 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-config-data" (OuterVolumeSpecName: "config-data") pod "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" (UID: "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.809089 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" (UID: "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.833721 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07095ffe-adde-4857-93db-5a02f0adf9e6" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.855962 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.855994 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.856097 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.856112 4909 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.874391 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "07095ffe-adde-4857-93db-5a02f0adf9e6" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.883801 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" (UID: "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.885796 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "0b222993-a4da-4936-807a-9e99c637bc27" (UID: "0b222993-a4da-4936-807a-9e99c637bc27"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.926808 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" (UID: "9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.928948 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6791905e-4b74-417e-bc1b-0747eac5878e" (UID: "6791905e-4b74-417e-bc1b-0747eac5878e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.953681 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.958613 4909 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.958646 4909 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.958659 4909 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.958668 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.958703 4909 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.958713 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:25 crc kubenswrapper[4909]: I1126 07:22:25.979931 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" (UID: "c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.006272 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07095ffe-adde-4857-93db-5a02f0adf9e6" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.007815 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b222993-a4da-4936-807a-9e99c637bc27" (UID: "0b222993-a4da-4936-807a-9e99c637bc27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.007876 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-config-data" (OuterVolumeSpecName: "config-data") pod "c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" (UID: "c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.008080 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-config-data" (OuterVolumeSpecName: "config-data") pod "0fdde234-058b-4e39-a647-b87669d3fda5" (UID: "0fdde234-058b-4e39-a647-b87669d3fda5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.011822 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" (UID: "76566d98-8a97-4bd6-9a1c-ae8c0eee9d88"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.012113 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-config-data" (OuterVolumeSpecName: "config-data") pod "6791905e-4b74-417e-bc1b-0747eac5878e" (UID: "6791905e-4b74-417e-bc1b-0747eac5878e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.018788 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data" (OuterVolumeSpecName: "config-data") pod "07095ffe-adde-4857-93db-5a02f0adf9e6" (UID: "07095ffe-adde-4857-93db-5a02f0adf9e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.023913 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placemente922-account-delete-ccdmj" event={"ID":"bc036cf2-920c-4497-bec8-cbf0d293c33a","Type":"ContainerDied","Data":"bd0bd9845229a3f8839101bf9448f6ec9c9d468e93752d55ca93e1c3f4b4b087"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.023955 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd0bd9845229a3f8839101bf9448f6ec9c9d468e93752d55ca93e1c3f4b4b087" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.029332 4909 generic.go:334] "Generic (PLEG): container finished" podID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerID="18c69a50eb20ebbdeb9c3c4cec5b96f232a261f134a17ae2bf389ddcaf0b29a6" exitCode=0 Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.029378 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8597f74f8-cp26v" event={"ID":"1746d8cc-9394-471e-a1c3-5471e65dfc73","Type":"ContainerDied","Data":"18c69a50eb20ebbdeb9c3c4cec5b96f232a261f134a17ae2bf389ddcaf0b29a6"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.029426 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8597f74f8-cp26v" event={"ID":"1746d8cc-9394-471e-a1c3-5471e65dfc73","Type":"ContainerDied","Data":"0c4c7977ab4eeaa306f1e287c8fad2c1f497474a57492a16ae989c545aff5a22"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.029436 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c4c7977ab4eeaa306f1e287c8fad2c1f497474a57492a16ae989c545aff5a22" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.034332 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"07095ffe-adde-4857-93db-5a02f0adf9e6","Type":"ContainerDied","Data":"5d8cfe480952ccec6c0dc1f80befee80044f33aba629dc5a87631947666ca62b"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.034483 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.056571 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76566d98-8a97-4bd6-9a1c-ae8c0eee9d88","Type":"ContainerDied","Data":"2ef100cb9143ba5408365743d3f8bc0712d229d77f99f46a201ecaef463a0656"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.056607 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.058844 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6791905e-4b74-417e-bc1b-0747eac5878e" (UID: "6791905e-4b74-417e-bc1b-0747eac5878e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.062611 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b222993-a4da-4936-807a-9e99c637bc27","Type":"ContainerDied","Data":"bea3f7be85f565776b96e6bf69994caf8e89e1606c65a8fbe7d726497d89357e"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.062698 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.070114 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fdde234-058b-4e39-a647-b87669d3fda5","Type":"ContainerDied","Data":"4778216886b7c052c38eb6655f0ac9e2b5ab33d3d886636dfd118e1069502765"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.070135 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.073229 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.073275 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6791905e-4b74-417e-bc1b-0747eac5878e","Type":"ContainerDied","Data":"78f1adaa8690f5dd5cda40c513600bbf47bbcd1e19dbec596d90cf360a9cce71"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.073661 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b222993-a4da-4936-807a-9e99c637bc27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.075099 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.075117 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6791905e-4b74-417e-bc1b-0747eac5878e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.075127 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdde234-058b-4e39-a647-b87669d3fda5-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.075156 4909 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.075166 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.075177 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.075186 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.075196 4909 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07095ffe-adde-4857-93db-5a02f0adf9e6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.077217 4909 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/novacell18b90-account-delete-n7278" event={"ID":"93d42f19-cfd6-4b06-aaf2-8febb4bd3945","Type":"ContainerDied","Data":"6d86292d1bea790cc218aff640cee6dd38bab6373add5ae7dac4a5963a0e669f"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.077362 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell18b90-account-delete-n7278" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.079752 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi1964-account-delete-vmj4p" event={"ID":"c468acce-9341-4eff-94c9-f38b74077fdf","Type":"ContainerDied","Data":"e3282db93757a8ba36a77ee24813824870d5393e3122ad327f1d798a44729554"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.079784 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3282db93757a8ba36a77ee24813824870d5393e3122ad327f1d798a44729554" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.084230 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" (UID: "c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.085734 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicana478-account-delete-9mdlb" event={"ID":"89513daa-9a0c-4888-9a33-0ba9c007da26","Type":"ContainerDied","Data":"470d455335109ddcd9477ac526c5c9a0c54c75aecb96b4b4a66e0e0ecffca745"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.085760 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="470d455335109ddcd9477ac526c5c9a0c54c75aecb96b4b4a66e0e0ecffca745" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.087602 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62","Type":"ContainerDied","Data":"1a7910f31f802a31ca5ab351df54a0452561f7fd74cb0bf0679319c74c688dff"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.087721 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.092372 4909 generic.go:334] "Generic (PLEG): container finished" podID="4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7" containerID="c6f4d8f0f4e61dc21f3c450f4b1e6411451bb293632a6aacfce4bc571716303d" exitCode=0 Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.092465 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7","Type":"ContainerDied","Data":"c6f4d8f0f4e61dc21f3c450f4b1e6411451bb293632a6aacfce4bc571716303d"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.097456 4909 generic.go:334] "Generic (PLEG): container finished" podID="edba305d-f8e6-4ab0-ae68-30b668037813" containerID="83f2fa0df126cd84a93da94a384252310d087fec1c7f6c1abf2c21ba3382de98" exitCode=0 Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.097555 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edba305d-f8e6-4ab0-ae68-30b668037813","Type":"ContainerDied","Data":"83f2fa0df126cd84a93da94a384252310d087fec1c7f6c1abf2c21ba3382de98"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.097603 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edba305d-f8e6-4ab0-ae68-30b668037813","Type":"ContainerDied","Data":"5e22ed2f01780cbace706870709638e27f61418d8b9438137022b70f7bab9af6"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.097617 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e22ed2f01780cbace706870709638e27f61418d8b9438137022b70f7bab9af6" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.108241 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance2354-account-delete-qnktx" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.111610 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8597f74f8-cp26v" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.112138 4909 generic.go:334] "Generic (PLEG): container finished" podID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerID="51f26cf80d8da853fb9da8dc0fafd164d9ce6124c41fe4293c3704ab70a1c633" exitCode=0 Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.112190 4909 generic.go:334] "Generic (PLEG): container finished" podID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerID="2a498b57189a43f970e29d0e8040bbe8756423b8463ab0d366df0dad3c6b6fa0" exitCode=0 Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.112289 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone2db1-account-delete-24jqj" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.112434 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bdc8ae5-e147-48a9-91d5-1f2425e2b379","Type":"ContainerDied","Data":"51f26cf80d8da853fb9da8dc0fafd164d9ce6124c41fe4293c3704ab70a1c633"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.112485 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bdc8ae5-e147-48a9-91d5-1f2425e2b379","Type":"ContainerDied","Data":"2a498b57189a43f970e29d0e8040bbe8756423b8463ab0d366df0dad3c6b6fa0"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.112499 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bdc8ae5-e147-48a9-91d5-1f2425e2b379","Type":"ContainerDied","Data":"cc26ec16cdbbbf1e5d30fb8087a8bc4ef1981749f48beef8a580156f5d5b2bb4"} Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.112508 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc26ec16cdbbbf1e5d30fb8087a8bc4ef1981749f48beef8a580156f5d5b2bb4" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.112569 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5fc4c8f8d8-g2ccp" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.112917 4909 scope.go:117] "RemoveContainer" containerID="880e17b02d4c9dab1267c346c106365cd7c194623167e4792837a0d418d59a8f" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.160897 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi1964-account-delete-vmj4p" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.177028 4909 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.179700 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.184021 4909 scope.go:117] "RemoveContainer" containerID="198a28e1346d287b3f7810156c663f6c5e14b09f746144f9f4aee54002dbd0a2" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.190779 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbicana478-account-delete-9mdlb" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.207167 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.208528 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placemente922-account-delete-ccdmj" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.216264 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.217789 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.231818 4909 scope.go:117] "RemoveContainer" containerID="58fff6394c3033d5aea46706541b169e8d37fa6d62b5f97b12a15d03cc97891e" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.238399 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone2db1-account-delete-24jqj" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.239015 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.239112 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.252632 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.259185 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.282266 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data\") pod \"1746d8cc-9394-471e-a1c3-5471e65dfc73\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.282359 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-public-tls-certs\") pod \"1746d8cc-9394-471e-a1c3-5471e65dfc73\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.282471 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data-custom\") pod \"1746d8cc-9394-471e-a1c3-5471e65dfc73\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.282562 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-combined-ca-bundle\") pod \"1746d8cc-9394-471e-a1c3-5471e65dfc73\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.282585 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7ndl\" (UniqueName: \"kubernetes.io/projected/c468acce-9341-4eff-94c9-f38b74077fdf-kube-api-access-c7ndl\") pod \"c468acce-9341-4eff-94c9-f38b74077fdf\" (UID: \"c468acce-9341-4eff-94c9-f38b74077fdf\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.282629 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-internal-tls-certs\") pod \"1746d8cc-9394-471e-a1c3-5471e65dfc73\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.282678 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1746d8cc-9394-471e-a1c3-5471e65dfc73-logs\") pod \"1746d8cc-9394-471e-a1c3-5471e65dfc73\" (UID: \"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.282698 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75kkn\" (UniqueName: \"kubernetes.io/projected/1746d8cc-9394-471e-a1c3-5471e65dfc73-kube-api-access-75kkn\") pod \"1746d8cc-9394-471e-a1c3-5471e65dfc73\" (UID: 
\"1746d8cc-9394-471e-a1c3-5471e65dfc73\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.286602 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1746d8cc-9394-471e-a1c3-5471e65dfc73-kube-api-access-75kkn" (OuterVolumeSpecName: "kube-api-access-75kkn") pod "1746d8cc-9394-471e-a1c3-5471e65dfc73" (UID: "1746d8cc-9394-471e-a1c3-5471e65dfc73"). InnerVolumeSpecName "kube-api-access-75kkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.287718 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.292321 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1746d8cc-9394-471e-a1c3-5471e65dfc73-logs" (OuterVolumeSpecName: "logs") pod "1746d8cc-9394-471e-a1c3-5471e65dfc73" (UID: "1746d8cc-9394-471e-a1c3-5471e65dfc73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.292396 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5fc4c8f8d8-g2ccp"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.293615 4909 scope.go:117] "RemoveContainer" containerID="8d96446636d1c32bd33935c854eafae7f92ad00599e940eb16e0ee0ad1233ddc" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.302603 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1746d8cc-9394-471e-a1c3-5471e65dfc73" (UID: "1746d8cc-9394-471e-a1c3-5471e65dfc73"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.305299 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5fc4c8f8d8-g2ccp"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.308308 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c468acce-9341-4eff-94c9-f38b74077fdf-kube-api-access-c7ndl" (OuterVolumeSpecName: "kube-api-access-c7ndl") pod "c468acce-9341-4eff-94c9-f38b74077fdf" (UID: "c468acce-9341-4eff-94c9-f38b74077fdf"). InnerVolumeSpecName "kube-api-access-c7ndl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.313322 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1746d8cc-9394-471e-a1c3-5471e65dfc73" (UID: "1746d8cc-9394-471e-a1c3-5471e65dfc73"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.322774 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.332672 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.341817 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.342530 4909 scope.go:117] "RemoveContainer" containerID="60fee10fbe2a728536d6c3503ed6b4b03afa53a6b02b3aa79491273e4536b60d" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.353501 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data" (OuterVolumeSpecName: "config-data") pod "1746d8cc-9394-471e-a1c3-5471e65dfc73" (UID: "1746d8cc-9394-471e-a1c3-5471e65dfc73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.355359 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.371349 4909 scope.go:117] "RemoveContainer" containerID="3e62a202acc19dddd034b5dca03867a48ca15be9ff76077f42a4246722cebcdf" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.373065 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1746d8cc-9394-471e-a1c3-5471e65dfc73" (UID: "1746d8cc-9394-471e-a1c3-5471e65dfc73"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384007 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-sg-core-conf-yaml\") pod \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384067 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmj8w\" (UniqueName: \"kubernetes.io/projected/bc036cf2-920c-4497-bec8-cbf0d293c33a-kube-api-access-rmj8w\") pod \"bc036cf2-920c-4497-bec8-cbf0d293c33a\" (UID: \"bc036cf2-920c-4497-bec8-cbf0d293c33a\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384102 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-combined-ca-bundle\") pod \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384324 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data\") pod \"edba305d-f8e6-4ab0-ae68-30b668037813\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384374 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edba305d-f8e6-4ab0-ae68-30b668037813-etc-machine-id\") pod \"edba305d-f8e6-4ab0-ae68-30b668037813\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384400 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64xch\" (UniqueName: \"kubernetes.io/projected/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-kube-api-access-64xch\") pod \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384466 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-config-data\") pod \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384500 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-scripts\") pod \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384549 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edba305d-f8e6-4ab0-ae68-30b668037813-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "edba305d-f8e6-4ab0-ae68-30b668037813" (UID: "edba305d-f8e6-4ab0-ae68-30b668037813"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384613 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-ceilometer-tls-certs\") pod \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384671 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-log-httpd\") pod \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384698 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrbdw\" (UniqueName: \"kubernetes.io/projected/edba305d-f8e6-4ab0-ae68-30b668037813-kube-api-access-zrbdw\") pod \"edba305d-f8e6-4ab0-ae68-30b668037813\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384744 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data-custom\") pod \"edba305d-f8e6-4ab0-ae68-30b668037813\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384782 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js44f\" (UniqueName: \"kubernetes.io/projected/89513daa-9a0c-4888-9a33-0ba9c007da26-kube-api-access-js44f\") pod \"89513daa-9a0c-4888-9a33-0ba9c007da26\" (UID: \"89513daa-9a0c-4888-9a33-0ba9c007da26\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384808 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-run-httpd\") pod \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\" (UID: \"3bdc8ae5-e147-48a9-91d5-1f2425e2b379\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384829 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-combined-ca-bundle\") pod \"edba305d-f8e6-4ab0-ae68-30b668037813\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.384856 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-scripts\") pod \"edba305d-f8e6-4ab0-ae68-30b668037813\" (UID: \"edba305d-f8e6-4ab0-ae68-30b668037813\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.385317 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.385342 4909 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edba305d-f8e6-4ab0-ae68-30b668037813-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.385356 4909 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.385369 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7ndl\" (UniqueName: \"kubernetes.io/projected/c468acce-9341-4eff-94c9-f38b74077fdf-kube-api-access-c7ndl\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.385384 4909 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.385397 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1746d8cc-9394-471e-a1c3-5471e65dfc73-logs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.385408 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75kkn\" (UniqueName: \"kubernetes.io/projected/1746d8cc-9394-471e-a1c3-5471e65dfc73-kube-api-access-75kkn\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.385419 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.388286 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell18b90-account-delete-n7278"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.389436 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3bdc8ae5-e147-48a9-91d5-1f2425e2b379" (UID: "3bdc8ae5-e147-48a9-91d5-1f2425e2b379"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.391278 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3bdc8ae5-e147-48a9-91d5-1f2425e2b379" (UID: "3bdc8ae5-e147-48a9-91d5-1f2425e2b379"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.391741 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell18b90-account-delete-n7278"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.392937 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-scripts" (OuterVolumeSpecName: "scripts") pod "edba305d-f8e6-4ab0-ae68-30b668037813" (UID: "edba305d-f8e6-4ab0-ae68-30b668037813"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.393432 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89513daa-9a0c-4888-9a33-0ba9c007da26-kube-api-access-js44f" (OuterVolumeSpecName: "kube-api-access-js44f") pod "89513daa-9a0c-4888-9a33-0ba9c007da26" (UID: "89513daa-9a0c-4888-9a33-0ba9c007da26"). InnerVolumeSpecName "kube-api-access-js44f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.398610 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.404269 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.404678 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-kube-api-access-64xch" (OuterVolumeSpecName: "kube-api-access-64xch") pod "3bdc8ae5-e147-48a9-91d5-1f2425e2b379" (UID: "3bdc8ae5-e147-48a9-91d5-1f2425e2b379"). InnerVolumeSpecName "kube-api-access-64xch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.406954 4909 scope.go:117] "RemoveContainer" containerID="ffdcc38e7deac196a6a6dc47ac259ef1b3c1eaff9265239fbcdfb5425c3fe186" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.407812 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-scripts" (OuterVolumeSpecName: "scripts") pod "3bdc8ae5-e147-48a9-91d5-1f2425e2b379" (UID: "3bdc8ae5-e147-48a9-91d5-1f2425e2b379"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.409354 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance2354-account-delete-qnktx"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.409871 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edba305d-f8e6-4ab0-ae68-30b668037813-kube-api-access-zrbdw" (OuterVolumeSpecName: "kube-api-access-zrbdw") pod "edba305d-f8e6-4ab0-ae68-30b668037813" (UID: "edba305d-f8e6-4ab0-ae68-30b668037813"). InnerVolumeSpecName "kube-api-access-zrbdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.409935 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "edba305d-f8e6-4ab0-ae68-30b668037813" (UID: "edba305d-f8e6-4ab0-ae68-30b668037813"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.410308 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc036cf2-920c-4497-bec8-cbf0d293c33a-kube-api-access-rmj8w" (OuterVolumeSpecName: "kube-api-access-rmj8w") pod "bc036cf2-920c-4497-bec8-cbf0d293c33a" (UID: "bc036cf2-920c-4497-bec8-cbf0d293c33a"). InnerVolumeSpecName "kube-api-access-rmj8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.415172 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance2354-account-delete-qnktx"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.435437 4909 scope.go:117] "RemoveContainer" containerID="bd1ad1420e63a2b12ad289deaed447ee9e8f36e1917943a95d281b6236ae4f9e" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.435552 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1746d8cc-9394-471e-a1c3-5471e65dfc73" (UID: "1746d8cc-9394-471e-a1c3-5471e65dfc73"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.442856 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3bdc8ae5-e147-48a9-91d5-1f2425e2b379" (UID: "3bdc8ae5-e147-48a9-91d5-1f2425e2b379"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.486521 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kolla-config\") pod \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.486838 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-memcached-tls-certs\") pod \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.487016 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-config-data\") pod \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.487123 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-combined-ca-bundle\") pod \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.487250 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbj8h\" (UniqueName: \"kubernetes.io/projected/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kube-api-access-sbj8h\") pod \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\" (UID: \"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7\") " Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.487655 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64xch\" (UniqueName: \"kubernetes.io/projected/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-kube-api-access-64xch\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.487724 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.487798 4909 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1746d8cc-9394-471e-a1c3-5471e65dfc73-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.487859 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.487926 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrbdw\" (UniqueName: \"kubernetes.io/projected/edba305d-f8e6-4ab0-ae68-30b668037813-kube-api-access-zrbdw\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.487984 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.488035 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js44f\" (UniqueName: \"kubernetes.io/projected/89513daa-9a0c-4888-9a33-0ba9c007da26-kube-api-access-js44f\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.488107 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.488158 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.488224 4909 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.495849 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmj8w\" (UniqueName: \"kubernetes.io/projected/bc036cf2-920c-4497-bec8-cbf0d293c33a-kube-api-access-rmj8w\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.492380 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7" (UID: "4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.494138 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-config-data" (OuterVolumeSpecName: "config-data") pod "4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7" (UID: "4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: E1126 07:22:26.495802 4909 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 26 07:22:26 crc kubenswrapper[4909]: E1126 07:22:26.496368 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data podName:37fbb13e-7e2e-451d-af0e-a648c4cde4c2 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:34.496344935 +0000 UTC m=+1326.642556101 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data") pod "rabbitmq-cell1-server-0" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2") : configmap "rabbitmq-cell1-config-data" not found Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.498667 4909 scope.go:117] "RemoveContainer" containerID="af985331eb5f612c15f9ade45f71e902d86e7e0cdc019c0a34c486877d6504c7" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.511289 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kube-api-access-sbj8h" (OuterVolumeSpecName: "kube-api-access-sbj8h") pod "4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7" (UID: "4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7"). InnerVolumeSpecName "kube-api-access-sbj8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.536800 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edba305d-f8e6-4ab0-ae68-30b668037813" (UID: "edba305d-f8e6-4ab0-ae68-30b668037813"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.556036 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07095ffe-adde-4857-93db-5a02f0adf9e6" path="/var/lib/kubelet/pods/07095ffe-adde-4857-93db-5a02f0adf9e6/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.556889 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b222993-a4da-4936-807a-9e99c637bc27" path="/var/lib/kubelet/pods/0b222993-a4da-4936-807a-9e99c637bc27/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.557523 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fdde234-058b-4e39-a647-b87669d3fda5" path="/var/lib/kubelet/pods/0fdde234-058b-4e39-a647-b87669d3fda5/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: E1126 07:22:26.591766 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f6a9a868f36816e8779d6bd9b8ec2e106d2790ceb880151f8a96ae57bf045a4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.598577 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24fe368f-39d5-438d-baf0-4e66700131f4" path="/var/lib/kubelet/pods/24fe368f-39d5-438d-baf0-4e66700131f4/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.599655 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe10693-bf37-4079-8917-cb194290cf6b" path="/var/lib/kubelet/pods/2fe10693-bf37-4079-8917-cb194290cf6b/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.600268 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbj8h\" (UniqueName: \"kubernetes.io/projected/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kube-api-access-sbj8h\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.600304 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.600315 4909 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.600327 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.600550 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c7b32b7-50e7-48b0-8027-a6d0c85d6d32" path="/var/lib/kubelet/pods/3c7b32b7-50e7-48b0-8027-a6d0c85d6d32/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.604147 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42945d07-91de-4b60-b6a0-e52dffe51a0d" path="/var/lib/kubelet/pods/42945d07-91de-4b60-b6a0-e52dffe51a0d/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: E1126 07:22:26.625837 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="2f6a9a868f36816e8779d6bd9b8ec2e106d2790ceb880151f8a96ae57bf045a4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.626061 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3bdc8ae5-e147-48a9-91d5-1f2425e2b379" (UID: "3bdc8ae5-e147-48a9-91d5-1f2425e2b379"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.630540 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6791905e-4b74-417e-bc1b-0747eac5878e" path="/var/lib/kubelet/pods/6791905e-4b74-417e-bc1b-0747eac5878e/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.631136 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7382debb-3dc4-4849-9109-5d415c6a196f" path="/var/lib/kubelet/pods/7382debb-3dc4-4849-9109-5d415c6a196f/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.631693 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" path="/var/lib/kubelet/pods/76566d98-8a97-4bd6-9a1c-ae8c0eee9d88/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.632611 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793c680c-7448-478e-bbf4-bca888e7e4c9" path="/var/lib/kubelet/pods/793c680c-7448-478e-bbf4-bca888e7e4c9/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.633042 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817235db-6c0f-43f5-8328-6eee7baf5839" path="/var/lib/kubelet/pods/817235db-6c0f-43f5-8328-6eee7baf5839/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.643152 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d42f19-cfd6-4b06-aaf2-8febb4bd3945" path="/var/lib/kubelet/pods/93d42f19-cfd6-4b06-aaf2-8febb4bd3945/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.643627 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944eaf5b-6552-409a-a932-7fceaf182ff7" path="/var/lib/kubelet/pods/944eaf5b-6552-409a-a932-7fceaf182ff7/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.644133 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" path="/var/lib/kubelet/pods/9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.654728 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2de6571-6dd9-40bc-ad9a-59015c568279" path="/var/lib/kubelet/pods/b2de6571-6dd9-40bc-ad9a-59015c568279/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.655467 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba28de5d-9cc0-475b-9eb5-6fce621ca4e2" path="/var/lib/kubelet/pods/ba28de5d-9cc0-475b-9eb5-6fce621ca4e2/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.655959 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" path="/var/lib/kubelet/pods/c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.656482 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d79c0347-3494-4451-83b3-9919dd346f19" 
path="/var/lib/kubelet/pods/d79c0347-3494-4451-83b3-9919dd346f19/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: E1126 07:22:26.657385 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f6a9a868f36816e8779d6bd9b8ec2e106d2790ceb880151f8a96ae57bf045a4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:26 crc kubenswrapper[4909]: E1126 07:22:26.657456 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="2c6b5670-38ee-4d52-af67-1e187962d73d" containerName="nova-cell1-conductor-conductor" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.657865 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4" path="/var/lib/kubelet/pods/ee5f38e5-ecb6-4284-85d8-a2c93db1cfa4/volumes" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.678810 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bdc8ae5-e147-48a9-91d5-1f2425e2b379" (UID: "3bdc8ae5-e147-48a9-91d5-1f2425e2b379"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.691725 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7" (UID: "4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.702643 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.702684 4909 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.702696 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.725444 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data" (OuterVolumeSpecName: "config-data") pod "edba305d-f8e6-4ab0-ae68-30b668037813" (UID: "edba305d-f8e6-4ab0-ae68-30b668037813"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.731855 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7" (UID: "4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.763864 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-config-data" (OuterVolumeSpecName: "config-data") pod "3bdc8ae5-e147-48a9-91d5-1f2425e2b379" (UID: "3bdc8ae5-e147-48a9-91d5-1f2425e2b379"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.804077 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9d9j\" (UniqueName: \"kubernetes.io/projected/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff-kube-api-access-g9d9j\") pod \"keystone2db1-account-delete-24jqj\" (UID: \"6e5dbbfc-d1fe-4335-b9fd-653192dd45ff\") " pod="openstack/keystone2db1-account-delete-24jqj" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.804258 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edba305d-f8e6-4ab0-ae68-30b668037813-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.804276 4909 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.804289 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdc8ae5-e147-48a9-91d5-1f2425e2b379-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:26 crc kubenswrapper[4909]: E1126 07:22:26.807777 4909 projected.go:194] Error preparing data for projected volume kube-api-access-g9d9j for pod openstack/keystone2db1-account-delete-24jqj: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 26 07:22:26 crc kubenswrapper[4909]: E1126 07:22:26.807862 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff-kube-api-access-g9d9j podName:6e5dbbfc-d1fe-4335-b9fd-653192dd45ff nodeName:}" failed. No retries permitted until 2025-11-26 07:22:28.807833704 +0000 UTC m=+1320.954044870 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-g9d9j" (UniqueName: "kubernetes.io/projected/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff-kube-api-access-g9d9j") pod "keystone2db1-account-delete-24jqj" (UID: "6e5dbbfc-d1fe-4335-b9fd-653192dd45ff") : failed to fetch token: serviceaccounts "galera-openstack" not found
Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.869278 4909 scope.go:117] "RemoveContainer" containerID="85ed4d9b155d719cd9be008a478f8c30dd565960a367abc54ca0b3cdd6d157e2"
Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.924744 4909 scope.go:117] "RemoveContainer" containerID="c6050544ee612b28a902864fcf0420d3aca003e37b8bf69449abc96dd1260ebc"
Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.948892 4909 scope.go:117] "RemoveContainer" containerID="346c5a3945f1eaf91c1fcf6d0365d419473a3372fe0402c424978821010165bc"
Nov 26 07:22:26 crc kubenswrapper[4909]: I1126 07:22:26.975211 4909 scope.go:117] "RemoveContainer" containerID="9af326558746ae1f5b6fd43ef25bfc03798d1031c68245c1a4e7bd66e604b033"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.006173 4909 scope.go:117] "RemoveContainer" containerID="6e9c74d44de9181ab6e32d80d4b92f4cc0c240f37302b35ea2fea0bdf1f79435"
Nov 26 07:22:27 crc kubenswrapper[4909]: E1126 07:22:27.116953 4909 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Nov 26 07:22:27 crc kubenswrapper[4909]: E1126 07:22:27.117047 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data podName:e827f391-2fcb-4758-ae5e-deef3c712e53 nodeName:}" failed. No retries permitted until 2025-11-26 07:22:35.117017048 +0000 UTC m=+1327.263228274 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data") pod "rabbitmq-server-0" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53") : configmap "rabbitmq-config-data" not found
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.136440 4909 scope.go:117] "RemoveContainer" containerID="c4a2a35459e0211d7d702615427fb3cab25f73244a809f8e2c702c0ed29c48f9"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.156351 4909 generic.go:334] "Generic (PLEG): container finished" podID="10d0826f-4316-4c9a-bb8d-542fccd12a08" containerID="e851027dc323ea0e4c8353f7e34bc561fcaf6af5e2c46334b9918eabe5ff4a83" exitCode=0
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.156419 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10d0826f-4316-4c9a-bb8d-542fccd12a08","Type":"ContainerDied","Data":"e851027dc323ea0e4c8353f7e34bc561fcaf6af5e2c46334b9918eabe5ff4a83"}
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.165021 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.165059 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7","Type":"ContainerDied","Data":"a4bb5a99315558850dc3ec94966927cdfa55355fdb705576d52657c717e68051"}
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.170267 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1f85aa19-7a2b-461e-9f33-6ba3f3261da4/ovn-northd/0.log"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.170307 4909 generic.go:334] "Generic (PLEG): container finished" podID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerID="c45a332a215d0d2112e3b34236e6f38a29d0cb0b840b7dcbc7cc8721193190e5" exitCode=139
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.170351 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1f85aa19-7a2b-461e-9f33-6ba3f3261da4","Type":"ContainerDied","Data":"c45a332a215d0d2112e3b34236e6f38a29d0cb0b840b7dcbc7cc8721193190e5"}
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.172113 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi1964-account-delete-vmj4p"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.172992 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.173416 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbicana478-account-delete-9mdlb"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.173768 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placemente922-account-delete-ccdmj"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.174099 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone2db1-account-delete-24jqj"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.174399 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8597f74f8-cp26v"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.174811 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.192186 4909 scope.go:117] "RemoveContainer" containerID="930ad0d120558b695b7edf3c8655a0626d86ed2453cbe13f094dc27931f585b5"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.209744 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1f85aa19-7a2b-461e-9f33-6ba3f3261da4/ovn-northd/0.log"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.209838 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.248264 4909 scope.go:117] "RemoveContainer" containerID="7efb83d37c0b27280af33c61d86ae63402ad34c0d0269b1b068ec8e29e729792"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.254111 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-combined-ca-bundle\") pod \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.254221 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-config\") pod \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.254243 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-metrics-certs-tls-certs\") pod \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.254290 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-northd-tls-certs\") pod \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.254356 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-rundir\") pod \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.254399 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlxwr\" (UniqueName: \"kubernetes.io/projected/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-kube-api-access-wlxwr\") pod \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.254439 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-scripts\") pod \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\" (UID: \"1f85aa19-7a2b-461e-9f33-6ba3f3261da4\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.256015 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-scripts" (OuterVolumeSpecName: "scripts") pod "1f85aa19-7a2b-461e-9f33-6ba3f3261da4" (UID: "1f85aa19-7a2b-461e-9f33-6ba3f3261da4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.256835 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-config" (OuterVolumeSpecName: "config") pod "1f85aa19-7a2b-461e-9f33-6ba3f3261da4" (UID: "1f85aa19-7a2b-461e-9f33-6ba3f3261da4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.257777 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "1f85aa19-7a2b-461e-9f33-6ba3f3261da4" (UID: "1f85aa19-7a2b-461e-9f33-6ba3f3261da4"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.284847 4909 scope.go:117] "RemoveContainer" containerID="c6f4d8f0f4e61dc21f3c450f4b1e6411451bb293632a6aacfce4bc571716303d"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.287539 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-kube-api-access-wlxwr" (OuterVolumeSpecName: "kube-api-access-wlxwr") pod "1f85aa19-7a2b-461e-9f33-6ba3f3261da4" (UID: "1f85aa19-7a2b-461e-9f33-6ba3f3261da4"). InnerVolumeSpecName "kube-api-access-wlxwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.294944 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.318472 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f85aa19-7a2b-461e-9f33-6ba3f3261da4" (UID: "1f85aa19-7a2b-461e-9f33-6ba3f3261da4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.337830 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.343978 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "1f85aa19-7a2b-461e-9f33-6ba3f3261da4" (UID: "1f85aa19-7a2b-461e-9f33-6ba3f3261da4"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.346922 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi1964-account-delete-vmj4p"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.351672 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi1964-account-delete-vmj4p"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.355790 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbicana478-account-delete-9mdlb"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.356796 4909 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-rundir\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.356810 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlxwr\" (UniqueName: \"kubernetes.io/projected/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-kube-api-access-wlxwr\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.356819 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.356829 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.356839 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.356849 4909 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.361772 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbicana478-account-delete-9mdlb"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.365785 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "1f85aa19-7a2b-461e-9f33-6ba3f3261da4" (UID: "1f85aa19-7a2b-461e-9f33-6ba3f3261da4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.367363 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.375443 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.383725 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone2db1-account-delete-24jqj"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.389480 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone2db1-account-delete-24jqj"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.397639 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placemente922-account-delete-ccdmj"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.399724 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placemente922-account-delete-ccdmj"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.405520 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8597f74f8-cp26v"]
Nov 26 07:22:27 crc kubenswrapper[4909]: E1126 07:22:27.408759 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0b5daeedd458f13616c9700a107ce6438a90f188ad69b81821639742af27e6e9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 26 07:22:27 crc kubenswrapper[4909]: E1126 07:22:27.409832 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0b5daeedd458f13616c9700a107ce6438a90f188ad69b81821639742af27e6e9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 26 07:22:27 crc kubenswrapper[4909]: E1126 07:22:27.411960 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0b5daeedd458f13616c9700a107ce6438a90f188ad69b81821639742af27e6e9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 26 07:22:27 crc kubenswrapper[4909]: E1126 07:22:27.412005 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="db82c9cc-8a13-4751-b93c-d5f9452dea67" containerName="nova-scheduler-scheduler"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.413677 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-8597f74f8-cp26v"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.423023 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.426957 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.453225 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.458548 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9d9j\" (UniqueName: \"kubernetes.io/projected/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff-kube-api-access-g9d9j\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.458576 4909 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f85aa19-7a2b-461e-9f33-6ba3f3261da4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.559641 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-kolla-config\") pod \"10d0826f-4316-4c9a-bb8d-542fccd12a08\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.559673 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-default\") pod \"10d0826f-4316-4c9a-bb8d-542fccd12a08\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.559701 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg9kw\" (UniqueName: \"kubernetes.io/projected/10d0826f-4316-4c9a-bb8d-542fccd12a08-kube-api-access-dg9kw\") pod \"10d0826f-4316-4c9a-bb8d-542fccd12a08\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.559730 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-operator-scripts\") pod \"10d0826f-4316-4c9a-bb8d-542fccd12a08\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.559749 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-combined-ca-bundle\") pod \"10d0826f-4316-4c9a-bb8d-542fccd12a08\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.559799 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-generated\") pod \"10d0826f-4316-4c9a-bb8d-542fccd12a08\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.559843 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"10d0826f-4316-4c9a-bb8d-542fccd12a08\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.559890 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-secrets\") pod \"10d0826f-4316-4c9a-bb8d-542fccd12a08\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.559909 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-galera-tls-certs\") pod \"10d0826f-4316-4c9a-bb8d-542fccd12a08\" (UID: \"10d0826f-4316-4c9a-bb8d-542fccd12a08\") "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.560308 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "10d0826f-4316-4c9a-bb8d-542fccd12a08" (UID: "10d0826f-4316-4c9a-bb8d-542fccd12a08"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.560335 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "10d0826f-4316-4c9a-bb8d-542fccd12a08" (UID: "10d0826f-4316-4c9a-bb8d-542fccd12a08"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.560742 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "10d0826f-4316-4c9a-bb8d-542fccd12a08" (UID: "10d0826f-4316-4c9a-bb8d-542fccd12a08"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.560953 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10d0826f-4316-4c9a-bb8d-542fccd12a08" (UID: "10d0826f-4316-4c9a-bb8d-542fccd12a08"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.564327 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-secrets" (OuterVolumeSpecName: "secrets") pod "10d0826f-4316-4c9a-bb8d-542fccd12a08" (UID: "10d0826f-4316-4c9a-bb8d-542fccd12a08"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.580815 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10d0826f-4316-4c9a-bb8d-542fccd12a08-kube-api-access-dg9kw" (OuterVolumeSpecName: "kube-api-access-dg9kw") pod "10d0826f-4316-4c9a-bb8d-542fccd12a08" (UID: "10d0826f-4316-4c9a-bb8d-542fccd12a08"). InnerVolumeSpecName "kube-api-access-dg9kw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.585507 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "10d0826f-4316-4c9a-bb8d-542fccd12a08" (UID: "10d0826f-4316-4c9a-bb8d-542fccd12a08"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.604269 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10d0826f-4316-4c9a-bb8d-542fccd12a08" (UID: "10d0826f-4316-4c9a-bb8d-542fccd12a08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.607712 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "10d0826f-4316-4c9a-bb8d-542fccd12a08" (UID: "10d0826f-4316-4c9a-bb8d-542fccd12a08"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.661735 4909 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-kolla-config\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.661792 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-default\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.661806 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg9kw\" (UniqueName: \"kubernetes.io/projected/10d0826f-4316-4c9a-bb8d-542fccd12a08-kube-api-access-dg9kw\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.661816 4909 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d0826f-4316-4c9a-bb8d-542fccd12a08-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.661827 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.661836 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/10d0826f-4316-4c9a-bb8d-542fccd12a08-config-data-generated\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.661886 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.661899 4909 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-secrets\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.661913 4909 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d0826f-4316-4c9a-bb8d-542fccd12a08-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.689950 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Nov 26 07:22:27 crc kubenswrapper[4909]: I1126 07:22:27.764567 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:27.995711 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.070752 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-server-conf\") pod \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.070833 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-plugins\") pod \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.070940 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-tls\") pod \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.070967 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-pod-info\") pod \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.071022 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data\") pod \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.071058 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-erlang-cookie-secret\") pod \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.071104 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2zk8\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-kube-api-access-b2zk8\") pod \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.071126 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-plugins-conf\") pod \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.071152 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-erlang-cookie\") pod \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.071171 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.071206 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-confd\") pod \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\" (UID: \"37fbb13e-7e2e-451d-af0e-a648c4cde4c2\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.072341 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "37fbb13e-7e2e-451d-af0e-a648c4cde4c2" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.072966 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "37fbb13e-7e2e-451d-af0e-a648c4cde4c2" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.073982 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "37fbb13e-7e2e-451d-af0e-a648c4cde4c2" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.077293 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-pod-info" (OuterVolumeSpecName: "pod-info") pod "37fbb13e-7e2e-451d-af0e-a648c4cde4c2" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.077509 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "37fbb13e-7e2e-451d-af0e-a648c4cde4c2" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.077633 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "37fbb13e-7e2e-451d-af0e-a648c4cde4c2" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.078244 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-kube-api-access-b2zk8" (OuterVolumeSpecName: "kube-api-access-b2zk8") pod "37fbb13e-7e2e-451d-af0e-a648c4cde4c2" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2"). InnerVolumeSpecName "kube-api-access-b2zk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.091892 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "37fbb13e-7e2e-451d-af0e-a648c4cde4c2" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.107162 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data" (OuterVolumeSpecName: "config-data") pod "37fbb13e-7e2e-451d-af0e-a648c4cde4c2" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.128423 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.141151 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-server-conf" (OuterVolumeSpecName: "server-conf") pod "37fbb13e-7e2e-451d-af0e-a648c4cde4c2" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.172156 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-confd\") pod \"e827f391-2fcb-4758-ae5e-deef3c712e53\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.172469 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-plugins-conf\") pod \"e827f391-2fcb-4758-ae5e-deef3c712e53\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.172509 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nntdk\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-kube-api-access-nntdk\") pod \"e827f391-2fcb-4758-ae5e-deef3c712e53\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.172549 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"e827f391-2fcb-4758-ae5e-deef3c712e53\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.172632 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-erlang-cookie\") pod \"e827f391-2fcb-4758-ae5e-deef3c712e53\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.172669 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e827f391-2fcb-4758-ae5e-deef3c712e53-pod-info\") pod \"e827f391-2fcb-4758-ae5e-deef3c712e53\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.172713 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-plugins\") pod \"e827f391-2fcb-4758-ae5e-deef3c712e53\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.172765 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data\") pod \"e827f391-2fcb-4758-ae5e-deef3c712e53\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.172788 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e827f391-2fcb-4758-ae5e-deef3c712e53-erlang-cookie-secret\") pod \"e827f391-2fcb-4758-ae5e-deef3c712e53\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.172813 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-server-conf\") pod \"e827f391-2fcb-4758-ae5e-deef3c712e53\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.172864 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-tls\") pod \"e827f391-2fcb-4758-ae5e-deef3c712e53\" (UID: \"e827f391-2fcb-4758-ae5e-deef3c712e53\") "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173234 4909 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-pod-info\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173253 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173263 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173271 4909 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173280 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2zk8\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-kube-api-access-b2zk8\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173287 4909 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173295 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173314 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173322 4909 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-server-conf\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173330 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173864 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e827f391-2fcb-4758-ae5e-deef3c712e53" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.173877 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e827f391-2fcb-4758-ae5e-deef3c712e53" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.174998 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e827f391-2fcb-4758-ae5e-deef3c712e53" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.178673 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "e827f391-2fcb-4758-ae5e-deef3c712e53" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.180161 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-kube-api-access-nntdk" (OuterVolumeSpecName: "kube-api-access-nntdk") pod "e827f391-2fcb-4758-ae5e-deef3c712e53" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53"). InnerVolumeSpecName "kube-api-access-nntdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.181888 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e827f391-2fcb-4758-ae5e-deef3c712e53" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.182268 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e827f391-2fcb-4758-ae5e-deef3c712e53-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e827f391-2fcb-4758-ae5e-deef3c712e53" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.187543 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e827f391-2fcb-4758-ae5e-deef3c712e53-pod-info" (OuterVolumeSpecName: "pod-info") pod "e827f391-2fcb-4758-ae5e-deef3c712e53" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: E1126 07:22:28.187654 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="000a88ca7c5406a86dd0230b6355446065c541ff1b5dd6796e96d8e7b58a4adc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.189721 4909 generic.go:334] "Generic (PLEG): container finished" podID="b0ef7a35-86f9-4afc-9529-ff707ba448a9" containerID="c7815d86ff25c599f9e26f760ca58bbfc89cea51769f7eddc87a7472792ccca9" exitCode=0 Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.189810 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8646886cd-cj5pc" event={"ID":"b0ef7a35-86f9-4afc-9529-ff707ba448a9","Type":"ContainerDied","Data":"c7815d86ff25c599f9e26f760ca58bbfc89cea51769f7eddc87a7472792ccca9"} Nov 26 07:22:28 crc kubenswrapper[4909]: E1126 07:22:28.192991 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="000a88ca7c5406a86dd0230b6355446065c541ff1b5dd6796e96d8e7b58a4adc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.194198 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10d0826f-4316-4c9a-bb8d-542fccd12a08","Type":"ContainerDied","Data":"3bdf1053a53ebf6937c8fbe6d8f87b44ba3f8fb90e94d2e76de7e97ed5039dd4"} Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.194246 4909 scope.go:117] "RemoveContainer" containerID="e851027dc323ea0e4c8353f7e34bc561fcaf6af5e2c46334b9918eabe5ff4a83" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.194386 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 26 07:22:28 crc kubenswrapper[4909]: E1126 07:22:28.201212 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="000a88ca7c5406a86dd0230b6355446065c541ff1b5dd6796e96d8e7b58a4adc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 26 07:22:28 crc kubenswrapper[4909]: E1126 07:22:28.201294 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="0d2c4878-7f21-469c-b19b-c76f335e9e75" containerName="nova-cell0-conductor-conductor" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.205654 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1f85aa19-7a2b-461e-9f33-6ba3f3261da4/ovn-northd/0.log" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.205849 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1f85aa19-7a2b-461e-9f33-6ba3f3261da4","Type":"ContainerDied","Data":"0ed0e2452160ef4ae95ae50c75b48257094cd6d0cbd42b0102d2bb54ef54c6ff"} Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.205892 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.210527 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data" (OuterVolumeSpecName: "config-data") pod "e827f391-2fcb-4758-ae5e-deef3c712e53" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.217825 4909 generic.go:334] "Generic (PLEG): container finished" podID="e827f391-2fcb-4758-ae5e-deef3c712e53" containerID="a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32" exitCode=0 Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.217983 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.219444 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e827f391-2fcb-4758-ae5e-deef3c712e53","Type":"ContainerDied","Data":"a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32"} Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.219508 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e827f391-2fcb-4758-ae5e-deef3c712e53","Type":"ContainerDied","Data":"8ff3db2f6cd1f90d3907b606bc71de8f11a3adb45789a6e7f610308b2ae7580f"} Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.232302 4909 generic.go:334] "Generic (PLEG): container finished" podID="37fbb13e-7e2e-451d-af0e-a648c4cde4c2" containerID="c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe" exitCode=0 Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.232362 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"37fbb13e-7e2e-451d-af0e-a648c4cde4c2","Type":"ContainerDied","Data":"c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe"} Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.232390 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"37fbb13e-7e2e-451d-af0e-a648c4cde4c2","Type":"ContainerDied","Data":"5308dd167f1c55fd869042c288c3cf397778b1bfef620eb4542966ccb671e15c"} Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.232461 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.252170 4909 scope.go:117] "RemoveContainer" containerID="6a6d4b0e6968ecb97d91448fae9b055603a1b2cd7c5c064b8021fc4fd6cd7dee" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.256274 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.267843 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-server-conf" (OuterVolumeSpecName: "server-conf") pod "e827f391-2fcb-4758-ae5e-deef3c712e53" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.268775 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "37fbb13e-7e2e-451d-af0e-a648c4cde4c2" (UID: "37fbb13e-7e2e-451d-af0e-a648c4cde4c2"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.269048 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.271557 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274409 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274436 4909 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e827f391-2fcb-4758-ae5e-deef3c712e53-pod-info\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274445 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274453 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274461 4909 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e827f391-2fcb-4758-ae5e-deef3c712e53-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274468 4909 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-server-conf\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274475 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274485 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274492 4909 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e827f391-2fcb-4758-ae5e-deef3c712e53-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274502 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37fbb13e-7e2e-451d-af0e-a648c4cde4c2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc 
kubenswrapper[4909]: I1126 07:22:28.274512 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nntdk\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-kube-api-access-nntdk\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274537 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.274567 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.279550 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.291919 4909 scope.go:117] "RemoveContainer" containerID="961f74545256d62a34bc75e2a3d148f6d0a38e6f3d41c1cc128a6f4f1eccd8f1" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.301526 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.311242 4909 scope.go:117] "RemoveContainer" containerID="c45a332a215d0d2112e3b34236e6f38a29d0cb0b840b7dcbc7cc8721193190e5" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.334884 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.336015 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e827f391-2fcb-4758-ae5e-deef3c712e53" (UID: "e827f391-2fcb-4758-ae5e-deef3c712e53"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.342266 4909 scope.go:117] "RemoveContainer" containerID="a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.371621 4909 scope.go:117] "RemoveContainer" containerID="3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.375643 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-scripts\") pod \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.375771 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-credential-keys\") pod \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.375865 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-internal-tls-certs\") pod \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.375962 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-combined-ca-bundle\") pod \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.376142 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-fernet-keys\") pod \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.376257 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-public-tls-certs\") pod \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.376324 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqr78\" (UniqueName: \"kubernetes.io/projected/b0ef7a35-86f9-4afc-9529-ff707ba448a9-kube-api-access-lqr78\") pod \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.376394 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-config-data\") pod \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\" (UID: \"b0ef7a35-86f9-4afc-9529-ff707ba448a9\") " Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.376760 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e827f391-2fcb-4758-ae5e-deef3c712e53-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 
07:22:28.376834 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.381298 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b0ef7a35-86f9-4afc-9529-ff707ba448a9" (UID: "b0ef7a35-86f9-4afc-9529-ff707ba448a9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.383601 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-scripts" (OuterVolumeSpecName: "scripts") pod "b0ef7a35-86f9-4afc-9529-ff707ba448a9" (UID: "b0ef7a35-86f9-4afc-9529-ff707ba448a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.384892 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ef7a35-86f9-4afc-9529-ff707ba448a9-kube-api-access-lqr78" (OuterVolumeSpecName: "kube-api-access-lqr78") pod "b0ef7a35-86f9-4afc-9529-ff707ba448a9" (UID: "b0ef7a35-86f9-4afc-9529-ff707ba448a9"). InnerVolumeSpecName "kube-api-access-lqr78". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.386730 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b0ef7a35-86f9-4afc-9529-ff707ba448a9" (UID: "b0ef7a35-86f9-4afc-9529-ff707ba448a9"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.398303 4909 scope.go:117] "RemoveContainer" containerID="a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32" Nov 26 07:22:28 crc kubenswrapper[4909]: E1126 07:22:28.399993 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32\": container with ID starting with a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32 not found: ID does not exist" containerID="a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.400036 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32"} err="failed to get container status \"a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32\": rpc error: code = NotFound desc = could not find container \"a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32\": container with ID starting with a2a23b6bda1e119d6b4d8a6bc74dd09e0f8c10c8c9c2ac399761caa70bc41f32 not found: ID does not exist" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.400060 4909 scope.go:117] "RemoveContainer" containerID="3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f" Nov 26 07:22:28 crc kubenswrapper[4909]: E1126 07:22:28.400869 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f\": container with ID starting with 3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f not found: ID does not exist" containerID="3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.400895 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f"} err="failed to get container status \"3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f\": rpc error: code = NotFound desc = could not find container \"3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f\": container with ID starting with 3430e12895a78f9b4ee46c0930a7313d436c26f455b9db9ec918158ec76b425f not found: ID does not exist" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.400914 4909 scope.go:117] "RemoveContainer" containerID="c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.409176 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0ef7a35-86f9-4afc-9529-ff707ba448a9" (UID: "b0ef7a35-86f9-4afc-9529-ff707ba448a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.410081 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-config-data" (OuterVolumeSpecName: "config-data") pod "b0ef7a35-86f9-4afc-9529-ff707ba448a9" (UID: "b0ef7a35-86f9-4afc-9529-ff707ba448a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.424848 4909 scope.go:117] "RemoveContainer" containerID="d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.433448 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b0ef7a35-86f9-4afc-9529-ff707ba448a9" (UID: "b0ef7a35-86f9-4afc-9529-ff707ba448a9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.438226 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b0ef7a35-86f9-4afc-9529-ff707ba448a9" (UID: "b0ef7a35-86f9-4afc-9529-ff707ba448a9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.446931 4909 scope.go:117] "RemoveContainer" containerID="c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe" Nov 26 07:22:28 crc kubenswrapper[4909]: E1126 07:22:28.448611 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe\": container with ID starting with c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe not found: ID does not exist" containerID="c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.448677 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe"} err="failed to get container status \"c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe\": rpc error: code = NotFound desc = could not find container \"c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe\": container with ID starting with c4cc7cda7eef4863d705b49ca750bfdfc8d6a4d6b502ead43c1543ad9b9606fe not found: ID does not exist" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.448718 4909 scope.go:117] "RemoveContainer" containerID="d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08" Nov 26 07:22:28 crc kubenswrapper[4909]: E1126 07:22:28.449393 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08\": container with ID starting with d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08 not found: ID does not exist" containerID="d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.449442 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08"} err="failed to get container status \"d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08\": rpc error: code = NotFound desc = could not find container \"d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08\": container with ID starting with 
d1d384120fc3c722dd351842aa2bfffb345fab68d60ea1240360254b7a8b0f08 not found: ID does not exist" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.479040 4909 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.479077 4909 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.479089 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqr78\" (UniqueName: \"kubernetes.io/projected/b0ef7a35-86f9-4afc-9529-ff707ba448a9-kube-api-access-lqr78\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.479098 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.479105 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.479113 4909 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.479121 4909 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.479129 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ef7a35-86f9-4afc-9529-ff707ba448a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.519854 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10d0826f-4316-4c9a-bb8d-542fccd12a08" path="/var/lib/kubelet/pods/10d0826f-4316-4c9a-bb8d-542fccd12a08/volumes" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.520685 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1746d8cc-9394-471e-a1c3-5471e65dfc73" path="/var/lib/kubelet/pods/1746d8cc-9394-471e-a1c3-5471e65dfc73/volumes" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.521679 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" path="/var/lib/kubelet/pods/1f85aa19-7a2b-461e-9f33-6ba3f3261da4/volumes" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.523200 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" path="/var/lib/kubelet/pods/3bdc8ae5-e147-48a9-91d5-1f2425e2b379/volumes" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.525518 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7" path="/var/lib/kubelet/pods/4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7/volumes" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.526890 4909 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e5dbbfc-d1fe-4335-b9fd-653192dd45ff" path="/var/lib/kubelet/pods/6e5dbbfc-d1fe-4335-b9fd-653192dd45ff/volumes" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.527209 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89513daa-9a0c-4888-9a33-0ba9c007da26" path="/var/lib/kubelet/pods/89513daa-9a0c-4888-9a33-0ba9c007da26/volumes" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.527740 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc036cf2-920c-4497-bec8-cbf0d293c33a" path="/var/lib/kubelet/pods/bc036cf2-920c-4497-bec8-cbf0d293c33a/volumes" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.528245 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c468acce-9341-4eff-94c9-f38b74077fdf" path="/var/lib/kubelet/pods/c468acce-9341-4eff-94c9-f38b74077fdf/volumes" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.529378 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edba305d-f8e6-4ab0-ae68-30b668037813" path="/var/lib/kubelet/pods/edba305d-f8e6-4ab0-ae68-30b668037813/volumes" Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.619663 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.624513 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.630449 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 07:22:28 crc kubenswrapper[4909]: I1126 07:22:28.637906 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.250516 4909 generic.go:334] "Generic (PLEG): container finished" podID="2c6b5670-38ee-4d52-af67-1e187962d73d" containerID="2f6a9a868f36816e8779d6bd9b8ec2e106d2790ceb880151f8a96ae57bf045a4" exitCode=0 Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.251035 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2c6b5670-38ee-4d52-af67-1e187962d73d","Type":"ContainerDied","Data":"2f6a9a868f36816e8779d6bd9b8ec2e106d2790ceb880151f8a96ae57bf045a4"} Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.274722 4909 generic.go:334] "Generic (PLEG): container finished" podID="0d2c4878-7f21-469c-b19b-c76f335e9e75" containerID="000a88ca7c5406a86dd0230b6355446065c541ff1b5dd6796e96d8e7b58a4adc" exitCode=0 Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.274810 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0d2c4878-7f21-469c-b19b-c76f335e9e75","Type":"ContainerDied","Data":"000a88ca7c5406a86dd0230b6355446065c541ff1b5dd6796e96d8e7b58a4adc"} Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.277979 4909 generic.go:334] "Generic (PLEG): container finished" podID="db82c9cc-8a13-4751-b93c-d5f9452dea67" containerID="0b5daeedd458f13616c9700a107ce6438a90f188ad69b81821639742af27e6e9" exitCode=0 Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.278017 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db82c9cc-8a13-4751-b93c-d5f9452dea67","Type":"ContainerDied","Data":"0b5daeedd458f13616c9700a107ce6438a90f188ad69b81821639742af27e6e9"} Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.291941 4909 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8646886cd-cj5pc" event={"ID":"b0ef7a35-86f9-4afc-9529-ff707ba448a9","Type":"ContainerDied","Data":"e46dd967ada859c733ba0187dec7a10556ed4153b68da348e27a1716bf2ac61f"} Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.291995 4909 scope.go:117] "RemoveContainer" containerID="c7815d86ff25c599f9e26f760ca58bbfc89cea51769f7eddc87a7472792ccca9" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.292150 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8646886cd-cj5pc" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.340852 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.360268 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8646886cd-cj5pc"] Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.365286 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8646886cd-cj5pc"] Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.403273 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-config-data\") pod \"db82c9cc-8a13-4751-b93c-d5f9452dea67\" (UID: \"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.403438 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br5ll\" (UniqueName: \"kubernetes.io/projected/db82c9cc-8a13-4751-b93c-d5f9452dea67-kube-api-access-br5ll\") pod \"db82c9cc-8a13-4751-b93c-d5f9452dea67\" (UID: \"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.403469 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-combined-ca-bundle\") pod \"db82c9cc-8a13-4751-b93c-d5f9452dea67\" (UID: \"db82c9cc-8a13-4751-b93c-d5f9452dea67\") " Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.437537 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db82c9cc-8a13-4751-b93c-d5f9452dea67-kube-api-access-br5ll" (OuterVolumeSpecName: "kube-api-access-br5ll") pod "db82c9cc-8a13-4751-b93c-d5f9452dea67" (UID: "db82c9cc-8a13-4751-b93c-d5f9452dea67"). InnerVolumeSpecName "kube-api-access-br5ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.441184 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db82c9cc-8a13-4751-b93c-d5f9452dea67" (UID: "db82c9cc-8a13-4751-b93c-d5f9452dea67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.452712 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.460959 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-config-data" (OuterVolumeSpecName: "config-data") pod "db82c9cc-8a13-4751-b93c-d5f9452dea67" (UID: "db82c9cc-8a13-4751-b93c-d5f9452dea67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.504530 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-combined-ca-bundle\") pod \"0d2c4878-7f21-469c-b19b-c76f335e9e75\" (UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.504633 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-config-data\") pod \"0d2c4878-7f21-469c-b19b-c76f335e9e75\" (UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.504671 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvn6h\" (UniqueName: \"kubernetes.io/projected/0d2c4878-7f21-469c-b19b-c76f335e9e75-kube-api-access-jvn6h\") pod \"0d2c4878-7f21-469c-b19b-c76f335e9e75\" (UID: \"0d2c4878-7f21-469c-b19b-c76f335e9e75\") " Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.504979 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br5ll\" (UniqueName: \"kubernetes.io/projected/db82c9cc-8a13-4751-b93c-d5f9452dea67-kube-api-access-br5ll\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.504991 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.505001 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db82c9cc-8a13-4751-b93c-d5f9452dea67-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.508116 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d2c4878-7f21-469c-b19b-c76f335e9e75-kube-api-access-jvn6h" (OuterVolumeSpecName: "kube-api-access-jvn6h") pod "0d2c4878-7f21-469c-b19b-c76f335e9e75" (UID: "0d2c4878-7f21-469c-b19b-c76f335e9e75"). InnerVolumeSpecName "kube-api-access-jvn6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.526361 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d2c4878-7f21-469c-b19b-c76f335e9e75" (UID: "0d2c4878-7f21-469c-b19b-c76f335e9e75"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.526965 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-config-data" (OuterVolumeSpecName: "config-data") pod "0d2c4878-7f21-469c-b19b-c76f335e9e75" (UID: "0d2c4878-7f21-469c-b19b-c76f335e9e75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.565843 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.605686 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-combined-ca-bundle\") pod \"2c6b5670-38ee-4d52-af67-1e187962d73d\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.605758 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzmbf\" (UniqueName: \"kubernetes.io/projected/2c6b5670-38ee-4d52-af67-1e187962d73d-kube-api-access-xzmbf\") pod \"2c6b5670-38ee-4d52-af67-1e187962d73d\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.605828 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-config-data\") pod \"2c6b5670-38ee-4d52-af67-1e187962d73d\" (UID: \"2c6b5670-38ee-4d52-af67-1e187962d73d\") " Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.606181 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.606197 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvn6h\" (UniqueName: \"kubernetes.io/projected/0d2c4878-7f21-469c-b19b-c76f335e9e75-kube-api-access-jvn6h\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.606218 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2c4878-7f21-469c-b19b-c76f335e9e75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.615947 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c6b5670-38ee-4d52-af67-1e187962d73d-kube-api-access-xzmbf" (OuterVolumeSpecName: "kube-api-access-xzmbf") pod "2c6b5670-38ee-4d52-af67-1e187962d73d" (UID: "2c6b5670-38ee-4d52-af67-1e187962d73d"). InnerVolumeSpecName "kube-api-access-xzmbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.628801 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c6b5670-38ee-4d52-af67-1e187962d73d" (UID: "2c6b5670-38ee-4d52-af67-1e187962d73d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.636135 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-config-data" (OuterVolumeSpecName: "config-data") pod "2c6b5670-38ee-4d52-af67-1e187962d73d" (UID: "2c6b5670-38ee-4d52-af67-1e187962d73d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.707673 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.707707 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b5670-38ee-4d52-af67-1e187962d73d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:29 crc kubenswrapper[4909]: I1126 07:22:29.707723 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzmbf\" (UniqueName: \"kubernetes.io/projected/2c6b5670-38ee-4d52-af67-1e187962d73d-kube-api-access-xzmbf\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:29 crc kubenswrapper[4909]: E1126 07:22:29.793123 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 26 07:22:29 crc kubenswrapper[4909]: E1126 07:22:29.793490 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 26 07:22:29 crc kubenswrapper[4909]: E1126 07:22:29.793742 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 26 07:22:29 crc kubenswrapper[4909]: E1126 07:22:29.793775 4909 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server" Nov 26 07:22:29 crc kubenswrapper[4909]: E1126 07:22:29.794898 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 26 07:22:29 crc kubenswrapper[4909]: E1126 07:22:29.796733 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 26 07:22:29 crc kubenswrapper[4909]: E1126 07:22:29.798533 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 26 07:22:29 crc kubenswrapper[4909]: E1126 07:22:29.798566 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovs-vswitchd" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.221113 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.313978 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2c6b5670-38ee-4d52-af67-1e187962d73d","Type":"ContainerDied","Data":"dd9a82bbcf39c1326fd29560bf5228a4882b5b65ff10e9309b763382bf5c3797"} Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.314018 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.314037 4909 scope.go:117] "RemoveContainer" containerID="2f6a9a868f36816e8779d6bd9b8ec2e106d2790ceb880151f8a96ae57bf045a4" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.315696 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-httpd-config\") pod \"978782ca-c440-4bb1-9516-30115aa4a0b2\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.315755 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-config\") pod \"978782ca-c440-4bb1-9516-30115aa4a0b2\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.315841 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-ovndb-tls-certs\") pod \"978782ca-c440-4bb1-9516-30115aa4a0b2\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.315877 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-public-tls-certs\") pod \"978782ca-c440-4bb1-9516-30115aa4a0b2\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.315921 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpg4x\" (UniqueName: \"kubernetes.io/projected/978782ca-c440-4bb1-9516-30115aa4a0b2-kube-api-access-qpg4x\") pod \"978782ca-c440-4bb1-9516-30115aa4a0b2\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.315954 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-combined-ca-bundle\") pod \"978782ca-c440-4bb1-9516-30115aa4a0b2\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.316037 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-internal-tls-certs\") pod \"978782ca-c440-4bb1-9516-30115aa4a0b2\" (UID: \"978782ca-c440-4bb1-9516-30115aa4a0b2\") " Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.320643 4909 generic.go:334] "Generic (PLEG): container finished" podID="978782ca-c440-4bb1-9516-30115aa4a0b2" containerID="0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa" exitCode=0 Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.320771 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74f9bb65df-qpbtq" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.320941 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74f9bb65df-qpbtq" event={"ID":"978782ca-c440-4bb1-9516-30115aa4a0b2","Type":"ContainerDied","Data":"0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa"} Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.320992 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/978782ca-c440-4bb1-9516-30115aa4a0b2-kube-api-access-qpg4x" (OuterVolumeSpecName: "kube-api-access-qpg4x") pod "978782ca-c440-4bb1-9516-30115aa4a0b2" (UID: "978782ca-c440-4bb1-9516-30115aa4a0b2"). InnerVolumeSpecName "kube-api-access-qpg4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.321016 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74f9bb65df-qpbtq" event={"ID":"978782ca-c440-4bb1-9516-30115aa4a0b2","Type":"ContainerDied","Data":"3a6a15c713f0cc34415fc9c44e4afdcfdc34a64cb3eef097b53ad3abb65f18e1"} Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.321704 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "978782ca-c440-4bb1-9516-30115aa4a0b2" (UID: "978782ca-c440-4bb1-9516-30115aa4a0b2"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.324987 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db82c9cc-8a13-4751-b93c-d5f9452dea67","Type":"ContainerDied","Data":"234b5a0c0f0caeb956d7e7919ed5a67c88645d77db6b15af01ce4bf55ed861e9"} Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.325072 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.327207 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0d2c4878-7f21-469c-b19b-c76f335e9e75","Type":"ContainerDied","Data":"a0b6335d54b72bb27be284681cef069817c3a32b984a7c558722b4bb97e568bb"} Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.327296 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.339124 4909 scope.go:117] "RemoveContainer" containerID="76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.354217 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8597f74f8-cp26v" podUID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": context deadline exceeded" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.354520 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8597f74f8-cp26v" podUID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: i/o timeout" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.370367 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.377550 4909 scope.go:117] "RemoveContainer" containerID="0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.380046 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "978782ca-c440-4bb1-9516-30115aa4a0b2" (UID: "978782ca-c440-4bb1-9516-30115aa4a0b2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.382124 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "978782ca-c440-4bb1-9516-30115aa4a0b2" (UID: "978782ca-c440-4bb1-9516-30115aa4a0b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.387646 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.393408 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "978782ca-c440-4bb1-9516-30115aa4a0b2" (UID: "978782ca-c440-4bb1-9516-30115aa4a0b2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.395391 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-config" (OuterVolumeSpecName: "config") pod "978782ca-c440-4bb1-9516-30115aa4a0b2" (UID: "978782ca-c440-4bb1-9516-30115aa4a0b2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.401483 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.404569 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "978782ca-c440-4bb1-9516-30115aa4a0b2" (UID: "978782ca-c440-4bb1-9516-30115aa4a0b2"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.412503 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.419700 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpg4x\" (UniqueName: \"kubernetes.io/projected/978782ca-c440-4bb1-9516-30115aa4a0b2-kube-api-access-qpg4x\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.419793 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.419813 4909 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.419826 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.419840 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-config\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.419850 4909 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.419861 4909 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/978782ca-c440-4bb1-9516-30115aa4a0b2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.419890 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.421813 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.428820 4909 scope.go:117] "RemoveContainer" containerID="76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876" Nov 26 07:22:30 crc kubenswrapper[4909]: E1126 07:22:30.429319 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876\": container with ID starting with 76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876 not found: ID does 
not exist" containerID="76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.429360 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876"} err="failed to get container status \"76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876\": rpc error: code = NotFound desc = could not find container \"76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876\": container with ID starting with 76433d410b3bdb9ac0bd74e594c5f6a1910da3ee19c74de567bb7d2f5efa0876 not found: ID does not exist" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.429389 4909 scope.go:117] "RemoveContainer" containerID="0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa" Nov 26 07:22:30 crc kubenswrapper[4909]: E1126 07:22:30.429769 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa\": container with ID starting with 0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa not found: ID does not exist" containerID="0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.429810 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa"} err="failed to get container status \"0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa\": rpc error: code = NotFound desc = could not find container \"0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa\": container with ID starting with 0ba402bdc40f70142608db90bc1c05dfb4969d4dc5787c16dd958c837d9c2eaa not found: ID does not exist" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.429839 4909 scope.go:117] "RemoveContainer" containerID="0b5daeedd458f13616c9700a107ce6438a90f188ad69b81821639742af27e6e9" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.454061 4909 scope.go:117] "RemoveContainer" containerID="000a88ca7c5406a86dd0230b6355446065c541ff1b5dd6796e96d8e7b58a4adc" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.509655 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d2c4878-7f21-469c-b19b-c76f335e9e75" path="/var/lib/kubelet/pods/0d2c4878-7f21-469c-b19b-c76f335e9e75/volumes" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.510201 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c6b5670-38ee-4d52-af67-1e187962d73d" path="/var/lib/kubelet/pods/2c6b5670-38ee-4d52-af67-1e187962d73d/volumes" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.511927 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fbb13e-7e2e-451d-af0e-a648c4cde4c2" path="/var/lib/kubelet/pods/37fbb13e-7e2e-451d-af0e-a648c4cde4c2/volumes" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.513187 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ef7a35-86f9-4afc-9529-ff707ba448a9" path="/var/lib/kubelet/pods/b0ef7a35-86f9-4afc-9529-ff707ba448a9/volumes" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.513816 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db82c9cc-8a13-4751-b93c-d5f9452dea67" path="/var/lib/kubelet/pods/db82c9cc-8a13-4751-b93c-d5f9452dea67/volumes" Nov 26 07:22:30 crc 
kubenswrapper[4909]: I1126 07:22:30.514492 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e827f391-2fcb-4758-ae5e-deef3c712e53" path="/var/lib/kubelet/pods/e827f391-2fcb-4758-ae5e-deef3c712e53/volumes" Nov 26 07:22:30 crc kubenswrapper[4909]: E1126 07:22:30.589458 4909 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod978782ca_c440_4bb1_9516_30115aa4a0b2.slice\": RecentStats: unable to find data in memory cache]" Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.640546 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74f9bb65df-qpbtq"] Nov 26 07:22:30 crc kubenswrapper[4909]: I1126 07:22:30.644823 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74f9bb65df-qpbtq"] Nov 26 07:22:32 crc kubenswrapper[4909]: I1126 07:22:32.512132 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="978782ca-c440-4bb1-9516-30115aa4a0b2" path="/var/lib/kubelet/pods/978782ca-c440-4bb1-9516-30115aa4a0b2/volumes" Nov 26 07:22:34 crc kubenswrapper[4909]: E1126 07:22:34.792540 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 26 07:22:34 crc kubenswrapper[4909]: E1126 07:22:34.794101 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 26 07:22:34 crc kubenswrapper[4909]: E1126 07:22:34.794510 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 26 07:22:34 crc kubenswrapper[4909]: E1126 07:22:34.794716 4909 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server" Nov 26 07:22:34 crc kubenswrapper[4909]: E1126 07:22:34.795287 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 26 07:22:34 crc kubenswrapper[4909]: E1126 07:22:34.797420 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 26 07:22:34 crc kubenswrapper[4909]: E1126 07:22:34.799288 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 26 07:22:34 crc kubenswrapper[4909]: E1126 07:22:34.799375 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovs-vswitchd"
Nov 26 07:22:37 crc kubenswrapper[4909]: I1126 07:22:37.301295 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 07:22:37 crc kubenswrapper[4909]: I1126 07:22:37.301376 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 07:22:39 crc kubenswrapper[4909]: E1126 07:22:39.792859 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Nov 26 07:22:39 crc kubenswrapper[4909]: E1126 07:22:39.793606 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Nov 26 07:22:39 crc kubenswrapper[4909]: E1126 07:22:39.793799 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 26 07:22:39 crc kubenswrapper[4909]: E1126 07:22:39.793869 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Nov 26 07:22:39 crc kubenswrapper[4909]: E1126 07:22:39.793893 4909 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server"
Nov 26 07:22:39 crc kubenswrapper[4909]: E1126 07:22:39.795265 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 26 07:22:39 crc kubenswrapper[4909]: E1126 07:22:39.797189 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 26 07:22:39 crc kubenswrapper[4909]: E1126 07:22:39.797241 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovs-vswitchd"
Nov 26 07:22:44 crc kubenswrapper[4909]: E1126 07:22:44.793359 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Nov 26 07:22:44 crc kubenswrapper[4909]: E1126 07:22:44.795164 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Nov 26 07:22:44 crc kubenswrapper[4909]: E1126 07:22:44.795920 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Nov 26 07:22:44 crc kubenswrapper[4909]: E1126 07:22:44.795963 4909 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server"
Nov 26 07:22:44 crc kubenswrapper[4909]: E1126 07:22:44.796107 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 26 07:22:44 crc kubenswrapper[4909]: E1126 07:22:44.797991 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 26 07:22:44 crc kubenswrapper[4909]: E1126 07:22:44.799364 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 26 07:22:44 crc kubenswrapper[4909]: E1126 07:22:44.799405 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5f8k9" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovs-vswitchd"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.291505 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5f8k9_b793112e-ecec-4fb1-b06a-3bf4245af24b/ovs-vswitchd/0.log"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.293046 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5f8k9"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.463089 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "b793112e-ecec-4fb1-b06a-3bf4245af24b" (UID: "b793112e-ecec-4fb1-b06a-3bf4245af24b"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.463158 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-etc-ovs\") pod \"b793112e-ecec-4fb1-b06a-3bf4245af24b\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") "
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.463227 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-lib\") pod \"b793112e-ecec-4fb1-b06a-3bf4245af24b\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") "
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.463296 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-lib" (OuterVolumeSpecName: "var-lib") pod "b793112e-ecec-4fb1-b06a-3bf4245af24b" (UID: "b793112e-ecec-4fb1-b06a-3bf4245af24b"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.463366 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-log\") pod \"b793112e-ecec-4fb1-b06a-3bf4245af24b\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") "
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.463480 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-log" (OuterVolumeSpecName: "var-log") pod "b793112e-ecec-4fb1-b06a-3bf4245af24b" (UID: "b793112e-ecec-4fb1-b06a-3bf4245af24b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.463637 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-run\") pod \"b793112e-ecec-4fb1-b06a-3bf4245af24b\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") "
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.463711 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-run" (OuterVolumeSpecName: "var-run") pod "b793112e-ecec-4fb1-b06a-3bf4245af24b" (UID: "b793112e-ecec-4fb1-b06a-3bf4245af24b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.463728 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b793112e-ecec-4fb1-b06a-3bf4245af24b-scripts\") pod \"b793112e-ecec-4fb1-b06a-3bf4245af24b\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") "
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.465747 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b793112e-ecec-4fb1-b06a-3bf4245af24b-scripts" (OuterVolumeSpecName: "scripts") pod "b793112e-ecec-4fb1-b06a-3bf4245af24b" (UID: "b793112e-ecec-4fb1-b06a-3bf4245af24b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.465877 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbkg\" (UniqueName: \"kubernetes.io/projected/b793112e-ecec-4fb1-b06a-3bf4245af24b-kube-api-access-qqbkg\") pod \"b793112e-ecec-4fb1-b06a-3bf4245af24b\" (UID: \"b793112e-ecec-4fb1-b06a-3bf4245af24b\") "
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.466559 4909 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-run\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.466627 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b793112e-ecec-4fb1-b06a-3bf4245af24b-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.466648 4909 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-etc-ovs\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.466667 4909 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-lib\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.466683 4909 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b793112e-ecec-4fb1-b06a-3bf4245af24b-var-log\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.475504 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b793112e-ecec-4fb1-b06a-3bf4245af24b-kube-api-access-qqbkg" (OuterVolumeSpecName: "kube-api-access-qqbkg") pod "b793112e-ecec-4fb1-b06a-3bf4245af24b" (UID: "b793112e-ecec-4fb1-b06a-3bf4245af24b"). InnerVolumeSpecName "kube-api-access-qqbkg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.533169 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5f8k9_b793112e-ecec-4fb1-b06a-3bf4245af24b/ovs-vswitchd/0.log"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.534333 4909 generic.go:334] "Generic (PLEG): container finished" podID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890" exitCode=137
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.534375 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5f8k9"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.534400 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5f8k9" event={"ID":"b793112e-ecec-4fb1-b06a-3bf4245af24b","Type":"ContainerDied","Data":"fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890"}
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.534450 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5f8k9" event={"ID":"b793112e-ecec-4fb1-b06a-3bf4245af24b","Type":"ContainerDied","Data":"db7ec0671549e28556b79ea5289a1dd1c7d199414a38fc0ad2681a93a85b9aae"}
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.534479 4909 scope.go:117] "RemoveContainer" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.567432 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-5f8k9"]
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.569801 4909 scope.go:117] "RemoveContainer" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.570051 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqbkg\" (UniqueName: \"kubernetes.io/projected/b793112e-ecec-4fb1-b06a-3bf4245af24b-kube-api-access-qqbkg\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.572928 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-5f8k9"]
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.593863 4909 scope.go:117] "RemoveContainer" containerID="a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.623371 4909 scope.go:117] "RemoveContainer" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890"
Nov 26 07:22:49 crc kubenswrapper[4909]: E1126 07:22:49.624143 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890\": container with ID starting with fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890 not found: ID does not exist" containerID="fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.624219 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890"} err="failed to get container status \"fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890\": rpc error: code = NotFound desc = could not find container \"fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890\": container with ID starting with fa122da77aa5fa63c9f2691a47736751b886834835ae0d5d11d47c71547e5890 not found: ID does not exist"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.624264 4909 scope.go:117] "RemoveContainer" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a"
Nov 26 07:22:49 crc kubenswrapper[4909]: E1126 07:22:49.624821 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a\": container with ID starting with 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a not found: ID does not exist" containerID="2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.624879 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a"} err="failed to get container status \"2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a\": rpc error: code = NotFound desc = could not find container \"2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a\": container with ID starting with 2a5e22e37147b78333e38a9d3c65c79e75a2ec5a7449edbb6742be34f7e86b5a not found: ID does not exist"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.624921 4909 scope.go:117] "RemoveContainer" containerID="a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97"
Nov 26 07:22:49 crc kubenswrapper[4909]: E1126 07:22:49.625300 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97\": container with ID starting with a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97 not found: ID does not exist" containerID="a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97"
Nov 26 07:22:49 crc kubenswrapper[4909]: I1126 07:22:49.625350 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97"} err="failed to get container status \"a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97\": rpc error: code = NotFound desc = could not find container \"a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97\": container with ID starting with a3d21adfb9fe31f1f0b18c57457d37fd3642e100459610d82ef350ea37e72f97 not found: ID does not exist"
Nov 26 07:22:50 crc kubenswrapper[4909]: I1126 07:22:50.516641 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" path="/var/lib/kubelet/pods/b793112e-ecec-4fb1-b06a-3bf4245af24b/volumes"
Nov 26 07:22:50 crc kubenswrapper[4909]: I1126 07:22:50.562556 4909 generic.go:334] "Generic (PLEG): container finished" podID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerID="1b1f9c5a8d3224d9a8311e314bf6dc4a0fbdc6f393e2987d8955a46b68d1bada" exitCode=137
Nov 26 07:22:50 crc kubenswrapper[4909]: I1126 07:22:50.562688 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"1b1f9c5a8d3224d9a8311e314bf6dc4a0fbdc6f393e2987d8955a46b68d1bada"}
Nov 26 07:22:50 crc kubenswrapper[4909]: I1126 07:22:50.897542 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.089681 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-cache\") pod \"93f8db39-0460-4b6a-89fe-0e9bb565462e\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") "
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.089843 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift\") pod \"93f8db39-0460-4b6a-89fe-0e9bb565462e\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") "
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.089875 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-lock\") pod \"93f8db39-0460-4b6a-89fe-0e9bb565462e\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") "
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.089926 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"93f8db39-0460-4b6a-89fe-0e9bb565462e\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") "
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.089974 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5kmz\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-kube-api-access-d5kmz\") pod \"93f8db39-0460-4b6a-89fe-0e9bb565462e\" (UID: \"93f8db39-0460-4b6a-89fe-0e9bb565462e\") "
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.090408 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-cache" (OuterVolumeSpecName: "cache") pod "93f8db39-0460-4b6a-89fe-0e9bb565462e" (UID: "93f8db39-0460-4b6a-89fe-0e9bb565462e"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.090498 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-lock" (OuterVolumeSpecName: "lock") pod "93f8db39-0460-4b6a-89fe-0e9bb565462e" (UID: "93f8db39-0460-4b6a-89fe-0e9bb565462e"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.094949 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "93f8db39-0460-4b6a-89fe-0e9bb565462e" (UID: "93f8db39-0460-4b6a-89fe-0e9bb565462e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.095827 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-kube-api-access-d5kmz" (OuterVolumeSpecName: "kube-api-access-d5kmz") pod "93f8db39-0460-4b6a-89fe-0e9bb565462e" (UID: "93f8db39-0460-4b6a-89fe-0e9bb565462e"). InnerVolumeSpecName "kube-api-access-d5kmz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.095999 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "swift") pod "93f8db39-0460-4b6a-89fe-0e9bb565462e" (UID: "93f8db39-0460-4b6a-89fe-0e9bb565462e"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.191649 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.191703 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5kmz\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-kube-api-access-d5kmz\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.191720 4909 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-cache\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.191732 4909 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/93f8db39-0460-4b6a-89fe-0e9bb565462e-etc-swift\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.191747 4909 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/93f8db39-0460-4b6a-89fe-0e9bb565462e-lock\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.213276 4909 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.292855 4909 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.582349 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"93f8db39-0460-4b6a-89fe-0e9bb565462e","Type":"ContainerDied","Data":"2a66b364fe99e0d8692030d5621c03464a2d52ffe679e4b466236a36cf795de3"}
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.582513 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.583819 4909 scope.go:117] "RemoveContainer" containerID="1b1f9c5a8d3224d9a8311e314bf6dc4a0fbdc6f393e2987d8955a46b68d1bada"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.618090 4909 scope.go:117] "RemoveContainer" containerID="96997ae8444f96d36126a818d42e9ce0882a0ec678fa1686cadf36da925626d7"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.640207 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.646125 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"]
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.649091 4909 scope.go:117] "RemoveContainer" containerID="07c32dca92ef9af6a5b2f1da9964db33a8d49c3a4d846c0cb66461ab457f596f"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.671250 4909 scope.go:117] "RemoveContainer" containerID="deb5869801f78aa72238df2b9719a9337500c7d4fe3cef9fd57bfea3f27a9500"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.697071 4909 scope.go:117] "RemoveContainer" containerID="93d0e136e4522423ec6013c050a8ff1959c79f2b6857b7223d3792246312b6bd"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.724605 4909 scope.go:117] "RemoveContainer" containerID="e0d087da0faef2436ea0b5dc36389de6f9bcae11c0745372234e7e2e2515dc1e"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.750878 4909 scope.go:117] "RemoveContainer" containerID="ae51b3e0f8704221eb8fa99538d9b20411e525c3d485412522af25ca33ee293d"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.783111 4909 scope.go:117] "RemoveContainer" containerID="12254f31c6a379da5fd4e34c45fd68057888fa099c912fe12dd9c1a881206bdf"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.806469 4909 scope.go:117] "RemoveContainer" containerID="d57d935982096fce0c90d166aa9755252570903363ed795caa7ea306a1c4a125"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.823088 4909 scope.go:117] "RemoveContainer" containerID="5d4ff632621d60ecaadd162fdb8816be897785eaad8d97513f60206f89fa1487"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.841345 4909 scope.go:117] "RemoveContainer" containerID="c76f25e43175f3d693010c16bd1b421da9f361eea4704ff1766122084490d5d8"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.862616 4909 scope.go:117] "RemoveContainer" containerID="a449d7cd0e0553480c704885c8e18a406ff461623be069faf59ed385c2a89148"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.883643 4909 scope.go:117] "RemoveContainer" containerID="ef85ba50ad3703e23f7fcb4391c0f594c7dc9bc10c9b5ed2ff4ec5998223f89c"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.907125 4909 scope.go:117] "RemoveContainer" containerID="d349b9ce563e6e2048f46f3884eca2d8e3ba6436ecab095b55cfbdff47ed90e8"
Nov 26 07:22:51 crc kubenswrapper[4909]: I1126 07:22:51.929322 4909 scope.go:117] "RemoveContainer" containerID="dced4a3ee055a4cc6d79d52944605e70abd5ed1457b4c96ba7b9b9ae67562306"
Nov 26 07:22:52 crc kubenswrapper[4909]: I1126 07:22:52.512052 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" path="/var/lib/kubelet/pods/93f8db39-0460-4b6a-89fe-0e9bb565462e/volumes"
Nov 26 07:23:07 crc kubenswrapper[4909]: I1126 07:23:07.300744 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 07:23:07 crc kubenswrapper[4909]: I1126 07:23:07.301467 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 07:23:07 crc kubenswrapper[4909]: I1126 07:23:07.301545 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv"
Nov 26 07:23:07 crc kubenswrapper[4909]: I1126 07:23:07.302733 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01a4d185a8d7c30690fef08cf37e1461869ce637ebe4ac1e55eebb9783625426"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 26 07:23:07 crc kubenswrapper[4909]: I1126 07:23:07.302825 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://01a4d185a8d7c30690fef08cf37e1461869ce637ebe4ac1e55eebb9783625426" gracePeriod=600
Nov 26 07:23:07 crc kubenswrapper[4909]: I1126 07:23:07.748078 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="01a4d185a8d7c30690fef08cf37e1461869ce637ebe4ac1e55eebb9783625426" exitCode=0
Nov 26 07:23:07 crc kubenswrapper[4909]: I1126 07:23:07.748167 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"01a4d185a8d7c30690fef08cf37e1461869ce637ebe4ac1e55eebb9783625426"}
Nov 26 07:23:07 crc kubenswrapper[4909]: I1126 07:23:07.748775 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246"}
Nov 26 07:23:07 crc kubenswrapper[4909]: I1126 07:23:07.748825 4909 scope.go:117] "RemoveContainer" containerID="ca3ef3a41105acb11f5f44a2c705bdfdb176056de55b77bbb72e7524ee7071fd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.177244 4909 scope.go:117] "RemoveContainer" containerID="4a780ea8cca99f8ab8ddd348936ba1a52963bb54194922a0b0e32fc27859e497"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.207194 4909 scope.go:117] "RemoveContainer" containerID="3b60bdf9d2f27f1a4462ec1b693a6f574de16cc4ac333faddad603ec240eb169"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.875333 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.875911 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e827f391-2fcb-4758-ae5e-deef3c712e53" containerName="setup-container"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.875929 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e827f391-2fcb-4758-ae5e-deef3c712e53" containerName="setup-container"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.875943 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7" containerName="memcached"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.875950 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7" containerName="memcached"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.875966 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-updater"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.875973 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-updater"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.875984 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b222993-a4da-4936-807a-9e99c637bc27" containerName="kube-state-metrics"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.875994 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b222993-a4da-4936-807a-9e99c637bc27" containerName="kube-state-metrics"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876006 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-auditor"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876013 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-auditor"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876026 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d0826f-4316-4c9a-bb8d-542fccd12a08" containerName="galera"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876032 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d0826f-4316-4c9a-bb8d-542fccd12a08" containerName="galera"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876047 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="rsync"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876054 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="rsync"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876061 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db82c9cc-8a13-4751-b93c-d5f9452dea67" containerName="nova-scheduler-scheduler"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876068 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="db82c9cc-8a13-4751-b93c-d5f9452dea67" containerName="nova-scheduler-scheduler"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876082 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edba305d-f8e6-4ab0-ae68-30b668037813" containerName="probe"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876089 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="edba305d-f8e6-4ab0-ae68-30b668037813" containerName="probe"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876097 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="ceilometer-central-agent"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876103 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="ceilometer-central-agent"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876113 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" containerName="placement-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876120 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" containerName="placement-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876132 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876139 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876152 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdde234-058b-4e39-a647-b87669d3fda5" containerName="glance-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876160 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdde234-058b-4e39-a647-b87669d3fda5" containerName="glance-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876174 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edba305d-f8e6-4ab0-ae68-30b668037813" containerName="cinder-scheduler"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876181 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="edba305d-f8e6-4ab0-ae68-30b668037813" containerName="cinder-scheduler"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876194 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-auditor"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876201 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-auditor"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876212 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" containerName="placement-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876221 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" containerName="placement-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876237 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovs-vswitchd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876245 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovs-vswitchd"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876257 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdde234-058b-4e39-a647-b87669d3fda5" containerName="glance-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876265 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdde234-058b-4e39-a647-b87669d3fda5" containerName="glance-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876274 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89513daa-9a0c-4888-9a33-0ba9c007da26" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876281 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="89513daa-9a0c-4888-9a33-0ba9c007da26" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876290 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-reaper"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876297 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-reaper"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876306 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978782ca-c440-4bb1-9516-30115aa4a0b2" containerName="neutron-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876312 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="978782ca-c440-4bb1-9516-30115aa4a0b2" containerName="neutron-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876319 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerName="openstack-network-exporter"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876327 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerName="openstack-network-exporter"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876335 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-replicator"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876340 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-replicator"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876349 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2c4878-7f21-469c-b19b-c76f335e9e75" containerName="nova-cell0-conductor-conductor"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876354 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2c4878-7f21-469c-b19b-c76f335e9e75" containerName="nova-cell0-conductor-conductor"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876366 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876372 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876378 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server-init"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876383 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server-init"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876392 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerName="barbican-api-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876397 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerName="barbican-api-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876405 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876411 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876419 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6b5670-38ee-4d52-af67-1e187962d73d" containerName="nova-cell1-conductor-conductor"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876424 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6b5670-38ee-4d52-af67-1e187962d73d" containerName="nova-cell1-conductor-conductor"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876431 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fbb13e-7e2e-451d-af0e-a648c4cde4c2" containerName="setup-container"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876437 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fbb13e-7e2e-451d-af0e-a648c4cde4c2" containerName="setup-container"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876443 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerName="nova-api-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876448 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerName="nova-api-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876454 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc036cf2-920c-4497-bec8-cbf0d293c33a" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876460 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc036cf2-920c-4497-bec8-cbf0d293c33a" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876467 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-replicator"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876473 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-replicator"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876481 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07095ffe-adde-4857-93db-5a02f0adf9e6" containerName="cinder-api-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876486 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="07095ffe-adde-4857-93db-5a02f0adf9e6" containerName="cinder-api-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876499 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="swift-recon-cron"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876504 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="swift-recon-cron"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876511 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6791905e-4b74-417e-bc1b-0747eac5878e" containerName="glance-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876516 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="6791905e-4b74-417e-bc1b-0747eac5878e" containerName="glance-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876524 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6791905e-4b74-417e-bc1b-0747eac5878e" containerName="glance-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876531 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="6791905e-4b74-417e-bc1b-0747eac5878e" containerName="glance-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876539 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-metadata"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876545 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-metadata"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876552 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07095ffe-adde-4857-93db-5a02f0adf9e6" containerName="cinder-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876558 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="07095ffe-adde-4857-93db-5a02f0adf9e6" containerName="cinder-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876565 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c468acce-9341-4eff-94c9-f38b74077fdf" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876570 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c468acce-9341-4eff-94c9-f38b74077fdf" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876579 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d0826f-4316-4c9a-bb8d-542fccd12a08" containerName="mysql-bootstrap"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876646 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d0826f-4316-4c9a-bb8d-542fccd12a08" containerName="mysql-bootstrap"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876658 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="proxy-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876664 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="proxy-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876672 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-auditor"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876678 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-auditor"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876688 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876693 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876702 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-replicator"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876708 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-replicator"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876716 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-expirer"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876721 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-expirer"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876731 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerName="nova-api-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876737 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerName="nova-api-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876746 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fbb13e-7e2e-451d-af0e-a648c4cde4c2" containerName="rabbitmq"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876751 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fbb13e-7e2e-451d-af0e-a648c4cde4c2" containerName="rabbitmq"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876760 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978782ca-c440-4bb1-9516-30115aa4a0b2" containerName="neutron-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876766 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="978782ca-c440-4bb1-9516-30115aa4a0b2" containerName="neutron-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876775 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerName="barbican-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876780 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerName="barbican-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876787 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerName="ovn-northd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876792 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerName="ovn-northd"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876802 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e827f391-2fcb-4758-ae5e-deef3c712e53" containerName="rabbitmq"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876807 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e827f391-2fcb-4758-ae5e-deef3c712e53" containerName="rabbitmq"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876813 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-updater"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876818 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-updater"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876824 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d42f19-cfd6-4b06-aaf2-8febb4bd3945" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876830 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d42f19-cfd6-4b06-aaf2-8febb4bd3945" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876840 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876846 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876853 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="sg-core"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876858 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="sg-core"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876865 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ef7a35-86f9-4afc-9529-ff707ba448a9" containerName="keystone-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876870 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ef7a35-86f9-4afc-9529-ff707ba448a9" containerName="keystone-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: E1126 07:23:32.876876 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="ceilometer-notification-agent"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.876882 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="ceilometer-notification-agent"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877027 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="89513daa-9a0c-4888-9a33-0ba9c007da26" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877036 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6b5670-38ee-4d52-af67-1e187962d73d" containerName="nova-cell1-conductor-conductor"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877046 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="07095ffe-adde-4857-93db-5a02f0adf9e6" containerName="cinder-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877057 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-reaper"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877067 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877075 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" containerName="placement-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877081 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="proxy-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877090 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovsdb-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877096 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="6791905e-4b74-417e-bc1b-0747eac5878e" containerName="glance-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877103 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdde234-058b-4e39-a647-b87669d3fda5" containerName="glance-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877113 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerName="ovn-northd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877120 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="swift-recon-cron"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877126 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877133 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-server"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877143 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="edba305d-f8e6-4ab0-ae68-30b668037813" containerName="probe"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877149 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-updater"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877158 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-replicator"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877167 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerName="barbican-api-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877175 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b222993-a4da-4936-807a-9e99c637bc27" containerName="kube-state-metrics"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877188 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7fdd4d-2a2a-443d-a3b2-789f93b75fdd" containerName="placement-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877198 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-auditor"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877207 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdde234-058b-4e39-a647-b87669d3fda5" containerName="glance-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877214 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="10d0826f-4316-4c9a-bb8d-542fccd12a08" containerName="galera"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877240 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f9bdd84-9798-4dc8-8fc7-e8dda24b12c7" containerName="memcached"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877248 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="978782ca-c440-4bb1-9516-30115aa4a0b2" containerName="neutron-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877256 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="ceilometer-central-agent"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877263 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="rsync"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877270 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="edba305d-f8e6-4ab0-ae68-30b668037813" containerName="cinder-scheduler"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877278 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerName="nova-api-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877284 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f85aa19-7a2b-461e-9f33-6ba3f3261da4" containerName="openstack-network-exporter"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877292 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="6791905e-4b74-417e-bc1b-0747eac5878e" containerName="glance-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877299 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d42f19-cfd6-4b06-aaf2-8febb4bd3945" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877305 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fbb13e-7e2e-451d-af0e-a648c4cde4c2" containerName="rabbitmq"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877313 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e827f391-2fcb-4758-ae5e-deef3c712e53" containerName="rabbitmq"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877319 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2c4878-7f21-469c-b19b-c76f335e9e75" containerName="nova-cell0-conductor-conductor"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877325 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-auditor"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877331 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="76566d98-8a97-4bd6-9a1c-ae8c0eee9d88" containerName="nova-api-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877341 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b793112e-ecec-4fb1-b06a-3bf4245af24b" containerName="ovs-vswitchd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877347 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="container-replicator"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877356 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-updater"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877365 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc036cf2-920c-4497-bec8-cbf0d293c33a" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877372 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c468acce-9341-4eff-94c9-f38b74077fdf" containerName="mariadb-account-delete"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877380 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="object-expirer"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877387 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="db82c9cc-8a13-4751-b93c-d5f9452dea67" containerName="nova-scheduler-scheduler"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877395 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-auditor"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877401 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="978782ca-c440-4bb1-9516-30115aa4a0b2" containerName="neutron-httpd"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877410 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877417 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="07095ffe-adde-4857-93db-5a02f0adf9e6" containerName="cinder-api-log"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877425 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="ceilometer-notification-agent"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877432 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ef7a35-86f9-4afc-9529-ff707ba448a9" containerName="keystone-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877441 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8db39-0460-4b6a-89fe-0e9bb565462e" containerName="account-replicator"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877449 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1746d8cc-9394-471e-a1c3-5471e65dfc73" containerName="barbican-api"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877457 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdc8ae5-e147-48a9-91d5-1f2425e2b379" containerName="sg-core"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877466 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e24ab8-f914-4ad5-8e82-e1e30e0d5b62" containerName="nova-metadata-metadata"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.877951 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.883914 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.884229 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.897083 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.955428 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9c706f70-09e6-4c92-b656-aaa6b5de2ecd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 26 07:23:32 crc kubenswrapper[4909]: I1126 07:23:32.955473 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9c706f70-09e6-4c92-b656-aaa6b5de2ecd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 26 07:23:33 crc kubenswrapper[4909]: I1126 07:23:33.056882 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9c706f70-09e6-4c92-b656-aaa6b5de2ecd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 26 07:23:33 crc kubenswrapper[4909]: I1126 07:23:33.056939 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9c706f70-09e6-4c92-b656-aaa6b5de2ecd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 26 07:23:33 crc kubenswrapper[4909]: I1126 07:23:33.057027 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9c706f70-09e6-4c92-b656-aaa6b5de2ecd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 26 07:23:33 crc kubenswrapper[4909]: I1126 07:23:33.080913 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9c706f70-09e6-4c92-b656-aaa6b5de2ecd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 26 07:23:33 crc kubenswrapper[4909]: I1126 07:23:33.213036 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 07:23:33 crc kubenswrapper[4909]: I1126 07:23:33.712154 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 26 07:23:34 crc kubenswrapper[4909]: I1126 07:23:34.008754 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9c706f70-09e6-4c92-b656-aaa6b5de2ecd","Type":"ContainerStarted","Data":"d2731f5cb6ece6e7fd6946a2627593d2a713bd2161e95a9702f86ba550706647"} Nov 26 07:23:35 crc kubenswrapper[4909]: I1126 07:23:35.023333 4909 generic.go:334] "Generic (PLEG): container finished" podID="9c706f70-09e6-4c92-b656-aaa6b5de2ecd" containerID="9e344311001948682e34ef0f718a962fb9e99a139913010ef41a00a60ce59aeb" exitCode=0 Nov 26 07:23:35 crc kubenswrapper[4909]: I1126 07:23:35.023461 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9c706f70-09e6-4c92-b656-aaa6b5de2ecd","Type":"ContainerDied","Data":"9e344311001948682e34ef0f718a962fb9e99a139913010ef41a00a60ce59aeb"} Nov 26 07:23:36 crc kubenswrapper[4909]: I1126 07:23:36.391064 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 07:23:36 crc kubenswrapper[4909]: I1126 07:23:36.510801 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kube-api-access\") pod \"9c706f70-09e6-4c92-b656-aaa6b5de2ecd\" (UID: \"9c706f70-09e6-4c92-b656-aaa6b5de2ecd\") " Nov 26 07:23:36 crc kubenswrapper[4909]: I1126 07:23:36.511014 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kubelet-dir\") pod \"9c706f70-09e6-4c92-b656-aaa6b5de2ecd\" (UID: \"9c706f70-09e6-4c92-b656-aaa6b5de2ecd\") " Nov 26 07:23:36 crc kubenswrapper[4909]: I1126 07:23:36.511139 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9c706f70-09e6-4c92-b656-aaa6b5de2ecd" (UID: "9c706f70-09e6-4c92-b656-aaa6b5de2ecd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:23:36 crc kubenswrapper[4909]: I1126 07:23:36.511500 4909 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 26 07:23:36 crc kubenswrapper[4909]: I1126 07:23:36.516817 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9c706f70-09e6-4c92-b656-aaa6b5de2ecd" (UID: "9c706f70-09e6-4c92-b656-aaa6b5de2ecd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:23:36 crc kubenswrapper[4909]: I1126 07:23:36.612374 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c706f70-09e6-4c92-b656-aaa6b5de2ecd-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 07:23:37 crc kubenswrapper[4909]: I1126 07:23:37.053298 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9c706f70-09e6-4c92-b656-aaa6b5de2ecd","Type":"ContainerDied","Data":"d2731f5cb6ece6e7fd6946a2627593d2a713bd2161e95a9702f86ba550706647"} Nov 26 07:23:37 crc kubenswrapper[4909]: I1126 07:23:37.053388 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2731f5cb6ece6e7fd6946a2627593d2a713bd2161e95a9702f86ba550706647" Nov 26 07:23:37 crc kubenswrapper[4909]: I1126 07:23:37.053808 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.670457 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 26 07:23:39 crc kubenswrapper[4909]: E1126 07:23:39.671070 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c706f70-09e6-4c92-b656-aaa6b5de2ecd" containerName="pruner" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.671085 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c706f70-09e6-4c92-b656-aaa6b5de2ecd" containerName="pruner" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.671250 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c706f70-09e6-4c92-b656-aaa6b5de2ecd" containerName="pruner" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.671780 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.676190 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.676489 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.687970 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.858255 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f49be5a2-7d31-4cf1-89cb-205755ea8592-kube-api-access\") pod \"installer-9-crc\" (UID: \"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.858339 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.858468 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-var-lock\") pod \"installer-9-crc\" (UID: \"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.959722 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f49be5a2-7d31-4cf1-89cb-205755ea8592-kube-api-access\") pod \"installer-9-crc\" (UID: \"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.960233 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.960333 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.960749 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-var-lock\") pod \"installer-9-crc\" (UID: \"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.960942 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-var-lock\") pod \"installer-9-crc\" (UID: 
\"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:23:39 crc kubenswrapper[4909]: I1126 07:23:39.979726 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f49be5a2-7d31-4cf1-89cb-205755ea8592-kube-api-access\") pod \"installer-9-crc\" (UID: \"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:23:40 crc kubenswrapper[4909]: I1126 07:23:40.006748 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:23:40 crc kubenswrapper[4909]: I1126 07:23:40.541717 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 26 07:23:41 crc kubenswrapper[4909]: I1126 07:23:41.103231 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f49be5a2-7d31-4cf1-89cb-205755ea8592","Type":"ContainerStarted","Data":"6650410568d74b0d653977c961172b4b151c4dc48c1ce6d80af1b10bfd2a1e5e"} Nov 26 07:23:41 crc kubenswrapper[4909]: I1126 07:23:41.103662 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f49be5a2-7d31-4cf1-89cb-205755ea8592","Type":"ContainerStarted","Data":"0bb301a07d1b5f9877f994cdfff75c631ab676004063942bbaf9289b23da4ecc"} Nov 26 07:23:41 crc kubenswrapper[4909]: I1126 07:23:41.138691 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.138652905 podStartE2EDuration="2.138652905s" podCreationTimestamp="2025-11-26 07:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:23:41.130473021 +0000 UTC m=+1393.276684227" watchObservedRunningTime="2025-11-26 07:23:41.138652905 +0000 UTC m=+1393.284864121" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.109739 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hkv9q"] Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.116992 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.128218 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkv9q"] Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.157849 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l96m\" (UniqueName: \"kubernetes.io/projected/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-kube-api-access-5l96m\") pod \"redhat-marketplace-hkv9q\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.157923 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-catalog-content\") pod \"redhat-marketplace-hkv9q\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.157953 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-utilities\") pod \"redhat-marketplace-hkv9q\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.259555 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l96m\" (UniqueName: \"kubernetes.io/projected/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-kube-api-access-5l96m\") pod \"redhat-marketplace-hkv9q\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.259896 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-catalog-content\") pod \"redhat-marketplace-hkv9q\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.260553 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-catalog-content\") pod \"redhat-marketplace-hkv9q\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.260790 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-utilities\") pod \"redhat-marketplace-hkv9q\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.261081 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-utilities\") pod \"redhat-marketplace-hkv9q\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.281699 4909 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5l96m\" (UniqueName: \"kubernetes.io/projected/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-kube-api-access-5l96m\") pod \"redhat-marketplace-hkv9q\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.459811 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:46 crc kubenswrapper[4909]: I1126 07:23:46.906378 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkv9q"] Nov 26 07:23:47 crc kubenswrapper[4909]: I1126 07:23:47.167557 4909 generic.go:334] "Generic (PLEG): container finished" podID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" containerID="6338776be046159e21ef9680559479c298be1d3b62d697cbc0bb150c3b3b9983" exitCode=0 Nov 26 07:23:47 crc kubenswrapper[4909]: I1126 07:23:47.167631 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkv9q" event={"ID":"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d","Type":"ContainerDied","Data":"6338776be046159e21ef9680559479c298be1d3b62d697cbc0bb150c3b3b9983"} Nov 26 07:23:47 crc kubenswrapper[4909]: I1126 07:23:47.167669 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkv9q" event={"ID":"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d","Type":"ContainerStarted","Data":"69000ad6180db320a78bec03058098f0264fb716e52f5ac9fa1f80ee4c33ca5b"} Nov 26 07:23:47 crc kubenswrapper[4909]: I1126 07:23:47.169776 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 07:23:48 crc kubenswrapper[4909]: I1126 07:23:48.181306 4909 generic.go:334] "Generic (PLEG): container finished" podID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" containerID="c30b53d5fb258b2e4213b2f5aec97a8bc0d5c27e364dee9630340a8ccbee4798" exitCode=0 Nov 26 07:23:48 crc kubenswrapper[4909]: I1126 07:23:48.181378 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkv9q" event={"ID":"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d","Type":"ContainerDied","Data":"c30b53d5fb258b2e4213b2f5aec97a8bc0d5c27e364dee9630340a8ccbee4798"} Nov 26 07:23:49 crc kubenswrapper[4909]: I1126 07:23:49.195851 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkv9q" event={"ID":"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d","Type":"ContainerStarted","Data":"b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c"} Nov 26 07:23:49 crc kubenswrapper[4909]: I1126 07:23:49.220989 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hkv9q" podStartSLOduration=1.631191719 podStartE2EDuration="3.220962501s" podCreationTimestamp="2025-11-26 07:23:46 +0000 UTC" firstStartedPulling="2025-11-26 07:23:47.169488519 +0000 UTC m=+1399.315699685" lastFinishedPulling="2025-11-26 07:23:48.759259261 +0000 UTC m=+1400.905470467" observedRunningTime="2025-11-26 07:23:49.213711643 +0000 UTC m=+1401.359922819" watchObservedRunningTime="2025-11-26 07:23:49.220962501 +0000 UTC m=+1401.367173687" Nov 26 07:23:56 crc kubenswrapper[4909]: I1126 07:23:56.460978 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:56 crc kubenswrapper[4909]: I1126 07:23:56.462473 4909 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:56 crc kubenswrapper[4909]: I1126 07:23:56.525957 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:57 crc kubenswrapper[4909]: I1126 07:23:57.331564 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:57 crc kubenswrapper[4909]: I1126 07:23:57.393151 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkv9q"] Nov 26 07:23:59 crc kubenswrapper[4909]: I1126 07:23:59.294057 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hkv9q" podUID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" containerName="registry-server" containerID="cri-o://b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c" gracePeriod=2 Nov 26 07:23:59 crc kubenswrapper[4909]: I1126 07:23:59.732959 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:23:59 crc kubenswrapper[4909]: I1126 07:23:59.861600 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l96m\" (UniqueName: \"kubernetes.io/projected/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-kube-api-access-5l96m\") pod \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " Nov 26 07:23:59 crc kubenswrapper[4909]: I1126 07:23:59.861686 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-catalog-content\") pod \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " Nov 26 07:23:59 crc kubenswrapper[4909]: I1126 07:23:59.861708 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-utilities\") pod \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\" (UID: \"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d\") " Nov 26 07:23:59 crc kubenswrapper[4909]: I1126 07:23:59.862667 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-utilities" (OuterVolumeSpecName: "utilities") pod "a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" (UID: "a4d5c17b-f35a-421a-952c-03fe4f0c9b0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:23:59 crc kubenswrapper[4909]: I1126 07:23:59.869640 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-kube-api-access-5l96m" (OuterVolumeSpecName: "kube-api-access-5l96m") pod "a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" (UID: "a4d5c17b-f35a-421a-952c-03fe4f0c9b0d"). InnerVolumeSpecName "kube-api-access-5l96m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:23:59 crc kubenswrapper[4909]: I1126 07:23:59.883316 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" (UID: "a4d5c17b-f35a-421a-952c-03fe4f0c9b0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:23:59 crc kubenswrapper[4909]: I1126 07:23:59.963853 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l96m\" (UniqueName: \"kubernetes.io/projected/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-kube-api-access-5l96m\") on node \"crc\" DevicePath \"\"" Nov 26 07:23:59 crc kubenswrapper[4909]: I1126 07:23:59.963903 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:23:59 crc kubenswrapper[4909]: I1126 07:23:59.963916 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.307701 4909 generic.go:334] "Generic (PLEG): container finished" podID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" containerID="b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c" exitCode=0 Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.307768 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkv9q" event={"ID":"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d","Type":"ContainerDied","Data":"b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c"} Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.307802 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkv9q" Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.307831 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkv9q" event={"ID":"a4d5c17b-f35a-421a-952c-03fe4f0c9b0d","Type":"ContainerDied","Data":"69000ad6180db320a78bec03058098f0264fb716e52f5ac9fa1f80ee4c33ca5b"} Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.307862 4909 scope.go:117] "RemoveContainer" containerID="b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c" Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.357188 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkv9q"] Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.363625 4909 scope.go:117] "RemoveContainer" containerID="c30b53d5fb258b2e4213b2f5aec97a8bc0d5c27e364dee9630340a8ccbee4798" Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.369858 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkv9q"] Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.393537 4909 scope.go:117] "RemoveContainer" containerID="6338776be046159e21ef9680559479c298be1d3b62d697cbc0bb150c3b3b9983" Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.411184 4909 scope.go:117] "RemoveContainer" containerID="b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c" Nov 26 07:24:00 crc kubenswrapper[4909]: E1126 07:24:00.411732 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c\": container with ID starting with b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c not found: ID does not exist" containerID="b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c" Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.411776 4909 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c"} err="failed to get container status \"b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c\": rpc error: code = NotFound desc = could not find container \"b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c\": container with ID starting with b6a8691c5cd49ab4aa0a6418d0eccad2a994f87ded53fc2fbee5c91aefd1e68c not found: ID does not exist" Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.411804 4909 scope.go:117] "RemoveContainer" containerID="c30b53d5fb258b2e4213b2f5aec97a8bc0d5c27e364dee9630340a8ccbee4798" Nov 26 07:24:00 crc kubenswrapper[4909]: E1126 07:24:00.412152 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c30b53d5fb258b2e4213b2f5aec97a8bc0d5c27e364dee9630340a8ccbee4798\": container with ID starting with c30b53d5fb258b2e4213b2f5aec97a8bc0d5c27e364dee9630340a8ccbee4798 not found: ID does not exist" containerID="c30b53d5fb258b2e4213b2f5aec97a8bc0d5c27e364dee9630340a8ccbee4798" Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.412182 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c30b53d5fb258b2e4213b2f5aec97a8bc0d5c27e364dee9630340a8ccbee4798"} err="failed to get container status \"c30b53d5fb258b2e4213b2f5aec97a8bc0d5c27e364dee9630340a8ccbee4798\": rpc error: code = NotFound desc = could not find container \"c30b53d5fb258b2e4213b2f5aec97a8bc0d5c27e364dee9630340a8ccbee4798\": container with ID starting with c30b53d5fb258b2e4213b2f5aec97a8bc0d5c27e364dee9630340a8ccbee4798 not found: ID does not exist" Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.412232 4909 scope.go:117] "RemoveContainer" containerID="6338776be046159e21ef9680559479c298be1d3b62d697cbc0bb150c3b3b9983" Nov 26 07:24:00 crc kubenswrapper[4909]: E1126 07:24:00.412490 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6338776be046159e21ef9680559479c298be1d3b62d697cbc0bb150c3b3b9983\": container with ID starting with 6338776be046159e21ef9680559479c298be1d3b62d697cbc0bb150c3b3b9983 not found: ID does not exist" containerID="6338776be046159e21ef9680559479c298be1d3b62d697cbc0bb150c3b3b9983" Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.412512 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6338776be046159e21ef9680559479c298be1d3b62d697cbc0bb150c3b3b9983"} err="failed to get container status \"6338776be046159e21ef9680559479c298be1d3b62d697cbc0bb150c3b3b9983\": rpc error: code = NotFound desc = could not find container \"6338776be046159e21ef9680559479c298be1d3b62d697cbc0bb150c3b3b9983\": container with ID starting with 6338776be046159e21ef9680559479c298be1d3b62d697cbc0bb150c3b3b9983 not found: ID does not exist" Nov 26 07:24:00 crc kubenswrapper[4909]: I1126 07:24:00.511991 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" path="/var/lib/kubelet/pods/a4d5c17b-f35a-421a-952c-03fe4f0c9b0d/volumes" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.082678 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kjkw9"] Nov 26 07:24:06 crc kubenswrapper[4909]: E1126 07:24:06.083626 4909 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" containerName="extract-content" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.083641 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" containerName="extract-content" Nov 26 07:24:06 crc kubenswrapper[4909]: E1126 07:24:06.083657 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" containerName="registry-server" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.083664 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" containerName="registry-server" Nov 26 07:24:06 crc kubenswrapper[4909]: E1126 07:24:06.083676 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" containerName="extract-utilities" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.083684 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" containerName="extract-utilities" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.083883 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d5c17b-f35a-421a-952c-03fe4f0c9b0d" containerName="registry-server" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.085364 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.090809 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjkw9"] Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.267651 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwcqz\" (UniqueName: \"kubernetes.io/projected/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-kube-api-access-vwcqz\") pod \"redhat-operators-kjkw9\" (UID: \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.268353 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-catalog-content\") pod \"redhat-operators-kjkw9\" (UID: \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.268386 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-utilities\") pod \"redhat-operators-kjkw9\" (UID: \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.370007 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-catalog-content\") pod \"redhat-operators-kjkw9\" (UID: \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.370049 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-utilities\") pod \"redhat-operators-kjkw9\" (UID: 
\"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.370122 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwcqz\" (UniqueName: \"kubernetes.io/projected/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-kube-api-access-vwcqz\") pod \"redhat-operators-kjkw9\" (UID: \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.370546 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-catalog-content\") pod \"redhat-operators-kjkw9\" (UID: \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.370621 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-utilities\") pod \"redhat-operators-kjkw9\" (UID: \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.396402 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwcqz\" (UniqueName: \"kubernetes.io/projected/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-kube-api-access-vwcqz\") pod \"redhat-operators-kjkw9\" (UID: \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.413162 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:06 crc kubenswrapper[4909]: I1126 07:24:06.675826 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjkw9"] Nov 26 07:24:07 crc kubenswrapper[4909]: I1126 07:24:07.368778 4909 generic.go:334] "Generic (PLEG): container finished" podID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" containerID="e973c9cf225406054e477595208778f4ee54171effd5a7c5ffc230760e24281c" exitCode=0 Nov 26 07:24:07 crc kubenswrapper[4909]: I1126 07:24:07.368830 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjkw9" event={"ID":"e3c784d2-fc47-4ff8-b47e-e05fcf89613a","Type":"ContainerDied","Data":"e973c9cf225406054e477595208778f4ee54171effd5a7c5ffc230760e24281c"} Nov 26 07:24:07 crc kubenswrapper[4909]: I1126 07:24:07.368859 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjkw9" event={"ID":"e3c784d2-fc47-4ff8-b47e-e05fcf89613a","Type":"ContainerStarted","Data":"d1d6aa1944f2bc2f3e3aedc93cb4bd5bc3d1a3ddafc93d1bde87eefd71d4f3d7"} Nov 26 07:24:08 crc kubenswrapper[4909]: I1126 07:24:08.377426 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjkw9" event={"ID":"e3c784d2-fc47-4ff8-b47e-e05fcf89613a","Type":"ContainerStarted","Data":"ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848"} Nov 26 07:24:09 crc kubenswrapper[4909]: I1126 07:24:09.388725 4909 generic.go:334] "Generic (PLEG): container finished" podID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" containerID="ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848" exitCode=0 Nov 26 07:24:09 crc kubenswrapper[4909]: I1126 07:24:09.388775 4909 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjkw9" event={"ID":"e3c784d2-fc47-4ff8-b47e-e05fcf89613a","Type":"ContainerDied","Data":"ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848"} Nov 26 07:24:10 crc kubenswrapper[4909]: I1126 07:24:10.402090 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjkw9" event={"ID":"e3c784d2-fc47-4ff8-b47e-e05fcf89613a","Type":"ContainerStarted","Data":"2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e"} Nov 26 07:24:10 crc kubenswrapper[4909]: I1126 07:24:10.429401 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kjkw9" podStartSLOduration=2.008139524 podStartE2EDuration="4.42938238s" podCreationTimestamp="2025-11-26 07:24:06 +0000 UTC" firstStartedPulling="2025-11-26 07:24:07.370654471 +0000 UTC m=+1419.516865647" lastFinishedPulling="2025-11-26 07:24:09.791897317 +0000 UTC m=+1421.938108503" observedRunningTime="2025-11-26 07:24:10.429352759 +0000 UTC m=+1422.575563925" watchObservedRunningTime="2025-11-26 07:24:10.42938238 +0000 UTC m=+1422.575593546" Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.464800 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nc94b"] Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.466368 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.491352 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nc94b"] Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.622021 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-utilities\") pod \"certified-operators-nc94b\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.622197 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42ffd\" (UniqueName: \"kubernetes.io/projected/7827358f-2d3b-47de-9f4e-80e0fbd67758-kube-api-access-42ffd\") pod \"certified-operators-nc94b\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.622240 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-catalog-content\") pod \"certified-operators-nc94b\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.723764 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-utilities\") pod \"certified-operators-nc94b\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.723886 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42ffd\" (UniqueName: 
\"kubernetes.io/projected/7827358f-2d3b-47de-9f4e-80e0fbd67758-kube-api-access-42ffd\") pod \"certified-operators-nc94b\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.723922 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-catalog-content\") pod \"certified-operators-nc94b\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.724466 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-catalog-content\") pod \"certified-operators-nc94b\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.724776 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-utilities\") pod \"certified-operators-nc94b\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.744733 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42ffd\" (UniqueName: \"kubernetes.io/projected/7827358f-2d3b-47de-9f4e-80e0fbd67758-kube-api-access-42ffd\") pod \"certified-operators-nc94b\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:13 crc kubenswrapper[4909]: I1126 07:24:13.785484 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:14 crc kubenswrapper[4909]: I1126 07:24:14.259056 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nc94b"] Nov 26 07:24:14 crc kubenswrapper[4909]: W1126 07:24:14.262205 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7827358f_2d3b_47de_9f4e_80e0fbd67758.slice/crio-0787fdaa2005b65517c4a2d92d0e354bb05ca9dc0b7ceabbb5dd876f8ea9eb06 WatchSource:0}: Error finding container 0787fdaa2005b65517c4a2d92d0e354bb05ca9dc0b7ceabbb5dd876f8ea9eb06: Status 404 returned error can't find the container with id 0787fdaa2005b65517c4a2d92d0e354bb05ca9dc0b7ceabbb5dd876f8ea9eb06 Nov 26 07:24:14 crc kubenswrapper[4909]: I1126 07:24:14.438111 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nc94b" event={"ID":"7827358f-2d3b-47de-9f4e-80e0fbd67758","Type":"ContainerStarted","Data":"0787fdaa2005b65517c4a2d92d0e354bb05ca9dc0b7ceabbb5dd876f8ea9eb06"} Nov 26 07:24:15 crc kubenswrapper[4909]: I1126 07:24:15.450968 4909 generic.go:334] "Generic (PLEG): container finished" podID="7827358f-2d3b-47de-9f4e-80e0fbd67758" containerID="20c77d9c08453474eefb84483edc058840754e1d9b32217496a626a0e3b4e056" exitCode=0 Nov 26 07:24:15 crc kubenswrapper[4909]: I1126 07:24:15.451009 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nc94b" event={"ID":"7827358f-2d3b-47de-9f4e-80e0fbd67758","Type":"ContainerDied","Data":"20c77d9c08453474eefb84483edc058840754e1d9b32217496a626a0e3b4e056"} Nov 26 07:24:16 crc kubenswrapper[4909]: I1126 07:24:16.413582 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:16 crc kubenswrapper[4909]: I1126 07:24:16.413733 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:16 crc kubenswrapper[4909]: I1126 07:24:16.461674 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:16 crc kubenswrapper[4909]: I1126 07:24:16.468524 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nc94b" event={"ID":"7827358f-2d3b-47de-9f4e-80e0fbd67758","Type":"ContainerStarted","Data":"7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b"} Nov 26 07:24:16 crc kubenswrapper[4909]: I1126 07:24:16.514439 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:24:17 crc kubenswrapper[4909]: I1126 07:24:17.478524 4909 generic.go:334] "Generic (PLEG): container finished" podID="7827358f-2d3b-47de-9f4e-80e0fbd67758" containerID="7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b" exitCode=0 Nov 26 07:24:17 crc kubenswrapper[4909]: I1126 07:24:17.478612 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nc94b" event={"ID":"7827358f-2d3b-47de-9f4e-80e0fbd67758","Type":"ContainerDied","Data":"7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b"} Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.492300 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nc94b" 
event={"ID":"7827358f-2d3b-47de-9f4e-80e0fbd67758","Type":"ContainerStarted","Data":"ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6"} Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.516280 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nc94b" podStartSLOduration=3.05704836 podStartE2EDuration="5.516264252s" podCreationTimestamp="2025-11-26 07:24:13 +0000 UTC" firstStartedPulling="2025-11-26 07:24:15.452699453 +0000 UTC m=+1427.598910629" lastFinishedPulling="2025-11-26 07:24:17.911915355 +0000 UTC m=+1430.058126521" observedRunningTime="2025-11-26 07:24:18.511761479 +0000 UTC m=+1430.657972675" watchObservedRunningTime="2025-11-26 07:24:18.516264252 +0000 UTC m=+1430.662475418" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.860117 4909 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.860488 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4" gracePeriod=15 Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.860531 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7" gracePeriod=15 Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.860516 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126" gracePeriod=15 Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.860514 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df" gracePeriod=15 Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.860475 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f" gracePeriod=15 Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.861875 4909 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 07:24:18 crc kubenswrapper[4909]: E1126 07:24:18.862210 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862232 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 26 07:24:18 crc kubenswrapper[4909]: E1126 07:24:18.862251 4909 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862259 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 26 07:24:18 crc kubenswrapper[4909]: E1126 07:24:18.862273 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862281 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 26 07:24:18 crc kubenswrapper[4909]: E1126 07:24:18.862300 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862307 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 07:24:18 crc kubenswrapper[4909]: E1126 07:24:18.862314 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862321 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 07:24:18 crc kubenswrapper[4909]: E1126 07:24:18.862332 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862339 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 26 07:24:18 crc kubenswrapper[4909]: E1126 07:24:18.862374 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862382 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862552 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862569 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862579 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862606 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862623 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.862633 4909 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.864731 4909 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.865452 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.869994 4909 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.902827 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.902863 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:18 crc kubenswrapper[4909]: I1126 07:24:18.902888 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.004327 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.004726 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.004448 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.004817 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.004867 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.004956 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.004881 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.005024 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.005103 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.005128 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.005158 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.106167 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.106256 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.106275 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.106327 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.106321 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.106346 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.106406 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.106388 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.106404 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.106387 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.501017 4909 generic.go:334] "Generic (PLEG): container finished" podID="f49be5a2-7d31-4cf1-89cb-205755ea8592" containerID="6650410568d74b0d653977c961172b4b151c4dc48c1ce6d80af1b10bfd2a1e5e" exitCode=0 Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.501077 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"f49be5a2-7d31-4cf1-89cb-205755ea8592","Type":"ContainerDied","Data":"6650410568d74b0d653977c961172b4b151c4dc48c1ce6d80af1b10bfd2a1e5e"} Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.501807 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.504186 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.505215 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 07:24:19 crc kubenswrapper[4909]: I1126 07:24:19.506162 4909 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df" exitCode=2 Nov 26 07:24:20 crc kubenswrapper[4909]: I1126 07:24:20.522350 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 26 07:24:20 crc kubenswrapper[4909]: I1126 07:24:20.524975 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 07:24:20 crc kubenswrapper[4909]: I1126 07:24:20.526127 4909 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4" exitCode=0 Nov 26 07:24:20 crc kubenswrapper[4909]: I1126 07:24:20.526172 4909 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7" exitCode=0 Nov 26 07:24:20 crc kubenswrapper[4909]: I1126 07:24:20.526183 4909 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126" exitCode=0 Nov 26 07:24:20 crc kubenswrapper[4909]: I1126 07:24:20.526262 4909 scope.go:117] "RemoveContainer" containerID="4bca3a6218464ea5d6646620e60830e4b927d076c0d2d1ae58b9cb715f805218" Nov 26 07:24:20 crc kubenswrapper[4909]: I1126 07:24:20.855939 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:24:20 crc kubenswrapper[4909]: I1126 07:24:20.857037 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.045276 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f49be5a2-7d31-4cf1-89cb-205755ea8592-kube-api-access\") pod \"f49be5a2-7d31-4cf1-89cb-205755ea8592\" (UID: \"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.045625 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-var-lock\") pod \"f49be5a2-7d31-4cf1-89cb-205755ea8592\" (UID: \"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.045647 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-kubelet-dir\") pod \"f49be5a2-7d31-4cf1-89cb-205755ea8592\" (UID: \"f49be5a2-7d31-4cf1-89cb-205755ea8592\") " Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.045865 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f49be5a2-7d31-4cf1-89cb-205755ea8592" (UID: "f49be5a2-7d31-4cf1-89cb-205755ea8592"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.045894 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-var-lock" (OuterVolumeSpecName: "var-lock") pod "f49be5a2-7d31-4cf1-89cb-205755ea8592" (UID: "f49be5a2-7d31-4cf1-89cb-205755ea8592"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.051972 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f49be5a2-7d31-4cf1-89cb-205755ea8592-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f49be5a2-7d31-4cf1-89cb-205755ea8592" (UID: "f49be5a2-7d31-4cf1-89cb-205755ea8592"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.146765 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f49be5a2-7d31-4cf1-89cb-205755ea8592-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.146798 4909 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-var-lock\") on node \"crc\" DevicePath \"\"" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.146810 4909 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f49be5a2-7d31-4cf1-89cb-205755ea8592-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.469096 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.470474 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.471366 4909 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.472003 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.539190 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f49be5a2-7d31-4cf1-89cb-205755ea8592","Type":"ContainerDied","Data":"0bb301a07d1b5f9877f994cdfff75c631ab676004063942bbaf9289b23da4ecc"} Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.539238 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bb301a07d1b5f9877f994cdfff75c631ab676004063942bbaf9289b23da4ecc" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.539255 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.546363 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.547370 4909 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f" exitCode=0 Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.547462 4909 scope.go:117] "RemoveContainer" containerID="243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.547468 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.564144 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.565057 4909 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.587318 4909 scope.go:117] "RemoveContainer" containerID="a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.603801 4909 scope.go:117] "RemoveContainer" containerID="2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.653502 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.653553 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.653692 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.653737 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.653745 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.653847 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.654490 4909 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.654533 4909 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.654543 4909 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.704959 4909 scope.go:117] "RemoveContainer" containerID="cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.720601 4909 scope.go:117] "RemoveContainer" containerID="f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.739711 4909 scope.go:117] "RemoveContainer" containerID="efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.761492 4909 scope.go:117] "RemoveContainer" containerID="243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4" Nov 26 07:24:21 crc kubenswrapper[4909]: E1126 07:24:21.762038 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\": container with ID starting with 243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4 not found: ID does not exist" containerID="243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.762070 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4"} err="failed to get container status \"243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\": rpc error: code = NotFound desc = could not find 
container \"243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4\": container with ID starting with 243bcc249ac6b35be4f077063d12b3c57ba21558f803de1eccfa674756772db4 not found: ID does not exist" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.762099 4909 scope.go:117] "RemoveContainer" containerID="a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7" Nov 26 07:24:21 crc kubenswrapper[4909]: E1126 07:24:21.762470 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\": container with ID starting with a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7 not found: ID does not exist" containerID="a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.762497 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7"} err="failed to get container status \"a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\": rpc error: code = NotFound desc = could not find container \"a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7\": container with ID starting with a8a4a185201df3788e04ad531e1600426c24041b461a7b317ab26389942987c7 not found: ID does not exist" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.762515 4909 scope.go:117] "RemoveContainer" containerID="2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126" Nov 26 07:24:21 crc kubenswrapper[4909]: E1126 07:24:21.762772 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\": container with ID starting with 2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126 not found: ID does not exist" containerID="2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.762797 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126"} err="failed to get container status \"2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\": rpc error: code = NotFound desc = could not find container \"2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126\": container with ID starting with 2101f1e8945f119f7e4817ef1f5e05c04b79857c7fb5fff5fbdef305afa09126 not found: ID does not exist" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.762813 4909 scope.go:117] "RemoveContainer" containerID="cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df" Nov 26 07:24:21 crc kubenswrapper[4909]: E1126 07:24:21.763088 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\": container with ID starting with cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df not found: ID does not exist" containerID="cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.763191 4909 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df"} err="failed to get container status \"cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\": rpc error: code = NotFound desc = could not find container \"cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df\": container with ID starting with cd30639fcd841b29c4d4be7f55b5ab120dec4fede50ef5e92d98a6f3743a32df not found: ID does not exist" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.763253 4909 scope.go:117] "RemoveContainer" containerID="f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f" Nov 26 07:24:21 crc kubenswrapper[4909]: E1126 07:24:21.763764 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\": container with ID starting with f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f not found: ID does not exist" containerID="f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.763787 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f"} err="failed to get container status \"f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\": rpc error: code = NotFound desc = could not find container \"f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f\": container with ID starting with f8cde0628354e1f930b51e8c18358fb8460790fb3ca5fed6f6e5ac3fc2abf66f not found: ID does not exist" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.763806 4909 scope.go:117] "RemoveContainer" containerID="efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1" Nov 26 07:24:21 crc kubenswrapper[4909]: E1126 07:24:21.764166 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\": container with ID starting with efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1 not found: ID does not exist" containerID="efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.764225 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1"} err="failed to get container status \"efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\": rpc error: code = NotFound desc = could not find container \"efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1\": container with ID starting with efd830819b52191b906d12640532535666a2d1e21db1bd000fe866e46570c2c1 not found: ID does not exist" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.864176 4909 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:21 crc kubenswrapper[4909]: I1126 07:24:21.864515 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:22 crc kubenswrapper[4909]: I1126 07:24:22.507209 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 26 07:24:23 crc kubenswrapper[4909]: E1126 07:24:23.132174 4909 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:23 crc kubenswrapper[4909]: E1126 07:24:23.132689 4909 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:23 crc kubenswrapper[4909]: E1126 07:24:23.133050 4909 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:23 crc kubenswrapper[4909]: E1126 07:24:23.133416 4909 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:23 crc kubenswrapper[4909]: E1126 07:24:23.133744 4909 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:23 crc kubenswrapper[4909]: I1126 07:24:23.133773 4909 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 26 07:24:23 crc kubenswrapper[4909]: E1126 07:24:23.134128 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" interval="200ms" Nov 26 07:24:23 crc kubenswrapper[4909]: E1126 07:24:23.335563 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" interval="400ms" Nov 26 07:24:23 crc kubenswrapper[4909]: E1126 07:24:23.737142 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" interval="800ms" Nov 26 07:24:23 crc kubenswrapper[4909]: I1126 07:24:23.785689 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:23 crc kubenswrapper[4909]: I1126 07:24:23.785731 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:23 crc kubenswrapper[4909]: I1126 07:24:23.847964 4909 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:23 crc kubenswrapper[4909]: I1126 07:24:23.848558 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:23 crc kubenswrapper[4909]: I1126 07:24:23.848959 4909 status_manager.go:851] "Failed to get status for pod" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" pod="openshift-marketplace/certified-operators-nc94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nc94b\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:23 crc kubenswrapper[4909]: E1126 07:24:23.903069 4909 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.206:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:23 crc kubenswrapper[4909]: I1126 07:24:23.903661 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:23 crc kubenswrapper[4909]: E1126 07:24:23.931363 4909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.206:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b7dadacf79086 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-26 07:24:23.930400902 +0000 UTC m=+1436.076612108,LastTimestamp:2025-11-26 07:24:23.930400902 +0000 UTC m=+1436.076612108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 26 07:24:24 crc kubenswrapper[4909]: E1126 07:24:24.538393 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" interval="1.6s" Nov 26 07:24:24 crc kubenswrapper[4909]: I1126 07:24:24.577799 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b"} Nov 26 07:24:24 crc kubenswrapper[4909]: I1126 07:24:24.577851 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a37224d69228d150e352455f4f9cc670d4c0aa785fffc5be7e038538d9196cd9"} Nov 26 07:24:24 crc kubenswrapper[4909]: E1126 07:24:24.578967 4909 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.206:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:24:24 crc kubenswrapper[4909]: I1126 07:24:24.579121 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:24 crc kubenswrapper[4909]: I1126 07:24:24.579562 4909 status_manager.go:851] "Failed to get status for pod" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" pod="openshift-marketplace/certified-operators-nc94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nc94b\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:24 crc kubenswrapper[4909]: I1126 07:24:24.619300 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:24:24 crc kubenswrapper[4909]: I1126 07:24:24.620114 4909 status_manager.go:851] "Failed to get status for pod" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" pod="openshift-marketplace/certified-operators-nc94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nc94b\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:24 crc kubenswrapper[4909]: I1126 07:24:24.620518 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:26 crc kubenswrapper[4909]: E1126 07:24:26.139902 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" interval="3.2s" Nov 26 07:24:28 crc kubenswrapper[4909]: I1126 07:24:28.503453 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:28 crc kubenswrapper[4909]: I1126 07:24:28.504420 4909 status_manager.go:851] "Failed to get status for pod" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" pod="openshift-marketplace/certified-operators-nc94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nc94b\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:29 crc kubenswrapper[4909]: E1126 07:24:29.341149 4909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.206:6443: connect: connection refused" interval="6.4s" Nov 26 07:24:30 crc kubenswrapper[4909]: I1126 07:24:30.639976 4909 generic.go:334] "Generic (PLEG): container finished" podID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" containerID="7f9df10f4906ec056b4ebd72b47a41386a1efb2995578f479966635a3c32ee18" exitCode=1 Nov 26 07:24:30 crc kubenswrapper[4909]: I1126 07:24:30.640083 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" event={"ID":"8ace07e4-e65b-451c-8623-f71b4f7d4f14","Type":"ContainerDied","Data":"7f9df10f4906ec056b4ebd72b47a41386a1efb2995578f479966635a3c32ee18"} Nov 26 07:24:30 crc kubenswrapper[4909]: I1126 07:24:30.641523 4909 scope.go:117] "RemoveContainer" containerID="7f9df10f4906ec056b4ebd72b47a41386a1efb2995578f479966635a3c32ee18" Nov 26 07:24:30 crc kubenswrapper[4909]: I1126 07:24:30.642066 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:30 crc kubenswrapper[4909]: I1126 07:24:30.642496 4909 status_manager.go:851] "Failed to get status for pod" podUID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-58dcdd989d-ctkx2\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:30 crc kubenswrapper[4909]: I1126 07:24:30.643467 4909 status_manager.go:851] "Failed to get status for pod" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" pod="openshift-marketplace/certified-operators-nc94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nc94b\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.651570 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.651927 4909 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c" exitCode=1 Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.652022 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c"} Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.652534 4909 scope.go:117] "RemoveContainer" containerID="00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c" Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.653339 4909 status_manager.go:851] "Failed to get status for pod" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" pod="openshift-marketplace/certified-operators-nc94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nc94b\": dial tcp 
38.129.56.206:6443: connect: connection refused" Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.653836 4909 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.654285 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.655040 4909 status_manager.go:851] "Failed to get status for pod" podUID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-58dcdd989d-ctkx2\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.655829 4909 generic.go:334] "Generic (PLEG): container finished" podID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" containerID="9b4d011e536fc46d2cb5d3846c98b802e172f0184e45ea0504ce2fd123dfb7ca" exitCode=1 Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.655889 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" event={"ID":"8ace07e4-e65b-451c-8623-f71b4f7d4f14","Type":"ContainerDied","Data":"9b4d011e536fc46d2cb5d3846c98b802e172f0184e45ea0504ce2fd123dfb7ca"} Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.656766 4909 status_manager.go:851] "Failed to get status for pod" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" pod="openshift-marketplace/certified-operators-nc94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nc94b\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.656017 4909 scope.go:117] "RemoveContainer" containerID="7f9df10f4906ec056b4ebd72b47a41386a1efb2995578f479966635a3c32ee18" Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.657174 4909 scope.go:117] "RemoveContainer" containerID="9b4d011e536fc46d2cb5d3846c98b802e172f0184e45ea0504ce2fd123dfb7ca" Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.657267 4909 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:31 crc kubenswrapper[4909]: E1126 07:24:31.657664 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-58dcdd989d-ctkx2_metallb-system(8ace07e4-e65b-451c-8623-f71b4f7d4f14)\"" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" podUID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" Nov 26 
Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.657809 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:31 crc kubenswrapper[4909]: I1126 07:24:31.658348 4909 status_manager.go:851] "Failed to get status for pod" podUID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-58dcdd989d-ctkx2\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:31 crc kubenswrapper[4909]: E1126 07:24:31.907932 4909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.206:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b7dadacf79086 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-26 07:24:23.930400902 +0000 UTC m=+1436.076612108,LastTimestamp:2025-11-26 07:24:23.930400902 +0000 UTC m=+1436.076612108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.317543 4909 scope.go:117] "RemoveContainer" containerID="259ac1f9f264c143c227a93f94d048fd7b340bc2d8592bce6cce59d08b832a6e"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.347805 4909 scope.go:117] "RemoveContainer" containerID="9de267a3ae62263d011dcd2f78926c503195c746bfce60ac6d585cd418181fee"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.494087 4909 scope.go:117] "RemoveContainer" containerID="abe8173aaa3344ab6d25c2b1142d4624d7cc8df8e25e8e9e5721a5c2abddfc18"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.525490 4909 scope.go:117] "RemoveContainer" containerID="a086419c64150860bdc1ce9fa4c0c19c2999a5f8f64e93307a4393715ab39abc"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.554739 4909 scope.go:117] "RemoveContainer" containerID="e481b641f17f22d80faee3fa2370145fafe49f1f7b46a9411e55d35dfb5b767d"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.653165 4909 scope.go:117] "RemoveContainer" containerID="e3e7f230581faf36b2c79ad68aca0174468cc8fd033f7e484814c3189b2ac392"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.654472 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.676194 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.676326 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5e6c9ad8f6da8cf9699e7224aa5b0a87401d251855213141056855b774be1077"}
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.677521 4909 status_manager.go:851] "Failed to get status for pod" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" pod="openshift-marketplace/certified-operators-nc94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nc94b\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.677843 4909 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.678202 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.678403 4909 status_manager.go:851] "Failed to get status for pod" podUID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-58dcdd989d-ctkx2\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.682491 4909 scope.go:117] "RemoveContainer" containerID="9b4d011e536fc46d2cb5d3846c98b802e172f0184e45ea0504ce2fd123dfb7ca"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.682674 4909 status_manager.go:851] "Failed to get status for pod" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" pod="openshift-marketplace/certified-operators-nc94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nc94b\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:32 crc kubenswrapper[4909]: E1126 07:24:32.682760 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-58dcdd989d-ctkx2_metallb-system(8ace07e4-e65b-451c-8623-f71b4f7d4f14)\"" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" podUID="8ace07e4-e65b-451c-8623-f71b4f7d4f14"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.682919 4909 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.683229 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.683416 4909 status_manager.go:851] "Failed to get status for pod" podUID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-58dcdd989d-ctkx2\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.685635 4909 scope.go:117] "RemoveContainer" containerID="7b1150b9f4dbb87e6854cf42f0e7aaa92f775964a5ecab7b7673a2766cabc798"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.709862 4909 scope.go:117] "RemoveContainer" containerID="304b4f863a8089e3faba398f81716b22c7a1e24312716d91f2e8e42dc45b0c88"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.761411 4909 scope.go:117] "RemoveContainer" containerID="af878b4cd5af5890eb29ddd41d3c62358d147f435921a892c7cd87cef16edc9d"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.788186 4909 scope.go:117] "RemoveContainer" containerID="7ffc69ceeef9cb263000a0891df54bc89be9425b8af470572c9407553344e65c"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.811965 4909 scope.go:117] "RemoveContainer" containerID="96ff5b9f7374832505846555fd743e47ed81c4cea93def2037316c077db458ff"
Nov 26 07:24:32 crc kubenswrapper[4909]: I1126 07:24:32.951063 4909 scope.go:117] "RemoveContainer" containerID="8a0d13185b0fd0f077d49e18f1b8a3c5a33b10dd4e5c9d4f488c90bb166a1761"
Nov 26 07:24:33 crc kubenswrapper[4909]: I1126 07:24:33.501881 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 07:24:33 crc kubenswrapper[4909]: I1126 07:24:33.503014 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:33 crc kubenswrapper[4909]: I1126 07:24:33.503187 4909 status_manager.go:851] "Failed to get status for pod" podUID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-58dcdd989d-ctkx2\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:33 crc kubenswrapper[4909]: I1126 07:24:33.503344 4909 status_manager.go:851] "Failed to get status for pod" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" pod="openshift-marketplace/certified-operators-nc94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nc94b\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:33 crc kubenswrapper[4909]: I1126 07:24:33.503511 4909 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:33 crc kubenswrapper[4909]: I1126 07:24:33.519851 4909 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b"
Nov 26 07:24:33 crc kubenswrapper[4909]: I1126 07:24:33.519902 4909 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b"
Nov 26 07:24:33 crc kubenswrapper[4909]: E1126 07:24:33.520519 4909 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 07:24:33 crc kubenswrapper[4909]: I1126 07:24:33.521423 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 07:24:33 crc kubenswrapper[4909]: W1126 07:24:33.571399 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ec144c41bfc8491b8b03ebd5f9f3ad1019822c635d307b3af400d5190e37a90e WatchSource:0}: Error finding container ec144c41bfc8491b8b03ebd5f9f3ad1019822c635d307b3af400d5190e37a90e: Status 404 returned error can't find the container with id ec144c41bfc8491b8b03ebd5f9f3ad1019822c635d307b3af400d5190e37a90e
Nov 26 07:24:33 crc kubenswrapper[4909]: I1126 07:24:33.694819 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ec144c41bfc8491b8b03ebd5f9f3ad1019822c635d307b3af400d5190e37a90e"}
Nov 26 07:24:34 crc kubenswrapper[4909]: I1126 07:24:34.706466 4909 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="488a08a7f507c17e15f01be58830f70e8e0be830bb4d529795fd31b8b149b3f8" exitCode=0
Nov 26 07:24:34 crc kubenswrapper[4909]: I1126 07:24:34.706528 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"488a08a7f507c17e15f01be58830f70e8e0be830bb4d529795fd31b8b149b3f8"}
Nov 26 07:24:34 crc kubenswrapper[4909]: I1126 07:24:34.706975 4909 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b"
Nov 26 07:24:34 crc kubenswrapper[4909]: I1126 07:24:34.707005 4909 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b"
Nov 26 07:24:34 crc kubenswrapper[4909]: E1126 07:24:34.707704 4909 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.206:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 07:24:34 crc kubenswrapper[4909]: I1126 07:24:34.708260 4909 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:34 crc kubenswrapper[4909]: I1126 07:24:34.708700 4909 status_manager.go:851] "Failed to get status for pod" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.206:6443: connect: connection refused"
Nov 26 07:24:34 crc kubenswrapper[4909]: I1126 07:24:34.709160 4909 status_manager.go:851] "Failed to get status for pod" podUID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-58dcdd989d-ctkx2\": dial tcp 38.129.56.206:6443: connect: connection refused"
07:24:34.709580 4909 status_manager.go:851] "Failed to get status for pod" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" pod="openshift-marketplace/certified-operators-nc94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nc94b\": dial tcp 38.129.56.206:6443: connect: connection refused" Nov 26 07:24:35 crc kubenswrapper[4909]: I1126 07:24:35.717947 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"21daeded32598ac44cfa9cceb9000a61065b2eea1725d333e067ee985c3cce4b"} Nov 26 07:24:35 crc kubenswrapper[4909]: I1126 07:24:35.718296 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"db80104787c937380adad27036aecd327466f490652764d5952b90bdc288856b"} Nov 26 07:24:35 crc kubenswrapper[4909]: I1126 07:24:35.718311 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6911f736cedc7e96f7054d5161f3f0a2bd8089f26fad03f3f2c39226dd08cabb"} Nov 26 07:24:36 crc kubenswrapper[4909]: I1126 07:24:36.729554 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"67dbc7c6cb8e97250fd875dabd1e27e80bf5de685b62a88d7be4919532445d1e"} Nov 26 07:24:36 crc kubenswrapper[4909]: I1126 07:24:36.729627 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a5eee3a128f14362c597b3600ef3b3b7600ec8db0705282b31e4f4a41e204ac5"} Nov 26 07:24:36 crc kubenswrapper[4909]: I1126 07:24:36.729780 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:36 crc kubenswrapper[4909]: I1126 07:24:36.729971 4909 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b" Nov 26 07:24:36 crc kubenswrapper[4909]: I1126 07:24:36.730007 4909 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b" Nov 26 07:24:38 crc kubenswrapper[4909]: I1126 07:24:38.521699 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:38 crc kubenswrapper[4909]: I1126 07:24:38.522129 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:38 crc kubenswrapper[4909]: I1126 07:24:38.527625 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.752913 4909 generic.go:334] "Generic (PLEG): container finished" podID="20a1b8f0-7e93-4d4a-b527-7470d128a2bc" containerID="79c93578600ecd3758192b8bf7324e13d17a763ba661c201ae2a6a523c5c904a" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.752954 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" 
event={"ID":"20a1b8f0-7e93-4d4a-b527-7470d128a2bc","Type":"ContainerDied","Data":"79c93578600ecd3758192b8bf7324e13d17a763ba661c201ae2a6a523c5c904a"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.753675 4909 scope.go:117] "RemoveContainer" containerID="79c93578600ecd3758192b8bf7324e13d17a763ba661c201ae2a6a523c5c904a" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.758560 4909 generic.go:334] "Generic (PLEG): container finished" podID="365248fc-0b34-46df-bbdc-043f89694812" containerID="2f0c1af130f72b92aaeb799fac7d478ed2dca786ccfc6225c4b0f1a81938746a" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.758618 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" event={"ID":"365248fc-0b34-46df-bbdc-043f89694812","Type":"ContainerDied","Data":"2f0c1af130f72b92aaeb799fac7d478ed2dca786ccfc6225c4b0f1a81938746a"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.759072 4909 scope.go:117] "RemoveContainer" containerID="2f0c1af130f72b92aaeb799fac7d478ed2dca786ccfc6225c4b0f1a81938746a" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.761532 4909 generic.go:334] "Generic (PLEG): container finished" podID="f8afd5eb-02e8-4a94-be0d-19a709270945" containerID="c7a8b1902520ca416dbc1a1302a978f3619802384f635376b343e714d0c5fa4d" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.761605 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" event={"ID":"f8afd5eb-02e8-4a94-be0d-19a709270945","Type":"ContainerDied","Data":"c7a8b1902520ca416dbc1a1302a978f3619802384f635376b343e714d0c5fa4d"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.761918 4909 scope.go:117] "RemoveContainer" containerID="c7a8b1902520ca416dbc1a1302a978f3619802384f635376b343e714d0c5fa4d" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.764157 4909 generic.go:334] "Generic (PLEG): container finished" podID="af4a09dd-04e0-465d-a817-bacf1a52babe" containerID="bc1d125ffc63dafd4f3c4861ea058ef2d347c94047260544e67516e6c7b32347" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.764213 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" event={"ID":"af4a09dd-04e0-465d-a817-bacf1a52babe","Type":"ContainerDied","Data":"bc1d125ffc63dafd4f3c4861ea058ef2d347c94047260544e67516e6c7b32347"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.764604 4909 scope.go:117] "RemoveContainer" containerID="bc1d125ffc63dafd4f3c4861ea058ef2d347c94047260544e67516e6c7b32347" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.766514 4909 generic.go:334] "Generic (PLEG): container finished" podID="9f41a032-71ff-4608-aa2c-b16469fe55a0" containerID="ac1a2edc25071651334d0ffbc1843b636e077a8204acf9400bfc1803e4395a58" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.766568 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" event={"ID":"9f41a032-71ff-4608-aa2c-b16469fe55a0","Type":"ContainerDied","Data":"ac1a2edc25071651334d0ffbc1843b636e077a8204acf9400bfc1803e4395a58"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.766876 4909 scope.go:117] "RemoveContainer" containerID="ac1a2edc25071651334d0ffbc1843b636e077a8204acf9400bfc1803e4395a58" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.768900 4909 generic.go:334] "Generic (PLEG): 
container finished" podID="61289245-0b12-4689-8a98-2b24544cacf8" containerID="5cc094590d1ef22f9ab1f460dcda65bde788d8fb0fad9d35cac512b326d5e61a" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.768972 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" event={"ID":"61289245-0b12-4689-8a98-2b24544cacf8","Type":"ContainerDied","Data":"5cc094590d1ef22f9ab1f460dcda65bde788d8fb0fad9d35cac512b326d5e61a"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.769730 4909 scope.go:117] "RemoveContainer" containerID="5cc094590d1ef22f9ab1f460dcda65bde788d8fb0fad9d35cac512b326d5e61a" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.770737 4909 generic.go:334] "Generic (PLEG): container finished" podID="0ebad6d0-e522-4012-869e-903c89bd1703" containerID="f252468407ffea2990ccc044949fabec74a0e6724982ea1fc7ab61ae0f7bdaf7" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.770784 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" event={"ID":"0ebad6d0-e522-4012-869e-903c89bd1703","Type":"ContainerDied","Data":"f252468407ffea2990ccc044949fabec74a0e6724982ea1fc7ab61ae0f7bdaf7"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.771049 4909 scope.go:117] "RemoveContainer" containerID="f252468407ffea2990ccc044949fabec74a0e6724982ea1fc7ab61ae0f7bdaf7" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.772236 4909 generic.go:334] "Generic (PLEG): container finished" podID="f4c87de0-5b1c-44f8-a2fb-1949a3f4af03" containerID="04cb65e8b18dc7bdf040f74d21ac9f4198eb4fc44f2cf45dfe14eb8552b1ca17" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.772263 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" event={"ID":"f4c87de0-5b1c-44f8-a2fb-1949a3f4af03","Type":"ContainerDied","Data":"04cb65e8b18dc7bdf040f74d21ac9f4198eb4fc44f2cf45dfe14eb8552b1ca17"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.772774 4909 scope.go:117] "RemoveContainer" containerID="04cb65e8b18dc7bdf040f74d21ac9f4198eb4fc44f2cf45dfe14eb8552b1ca17" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.774020 4909 generic.go:334] "Generic (PLEG): container finished" podID="b3ca7f6d-4dba-4e22-ae42-f4184932fba2" containerID="e550c2419b5af6d03322135ecf4f934e6214479b59cdbfb10e48825d20a9314c" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.774098 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" event={"ID":"b3ca7f6d-4dba-4e22-ae42-f4184932fba2","Type":"ContainerDied","Data":"e550c2419b5af6d03322135ecf4f934e6214479b59cdbfb10e48825d20a9314c"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.774566 4909 scope.go:117] "RemoveContainer" containerID="e550c2419b5af6d03322135ecf4f934e6214479b59cdbfb10e48825d20a9314c" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.776667 4909 generic.go:334] "Generic (PLEG): container finished" podID="8c9c6404-9f47-434c-ac1b-d08cd48d5156" containerID="fb7999a7f7ff3cd8b133f4854000a9b97fe779663bdbcc7766d0e10a64451e17" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.776751 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" 
event={"ID":"8c9c6404-9f47-434c-ac1b-d08cd48d5156","Type":"ContainerDied","Data":"fb7999a7f7ff3cd8b133f4854000a9b97fe779663bdbcc7766d0e10a64451e17"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.777198 4909 scope.go:117] "RemoveContainer" containerID="fb7999a7f7ff3cd8b133f4854000a9b97fe779663bdbcc7766d0e10a64451e17" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.787031 4909 generic.go:334] "Generic (PLEG): container finished" podID="cd83d237-7922-4458-9fce-8c296d0ccc0f" containerID="80fc53263aa7e40af1c8dcd3f54d8e8ed14e57962477c3678d61494efa132dbd" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.787119 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" event={"ID":"cd83d237-7922-4458-9fce-8c296d0ccc0f","Type":"ContainerDied","Data":"80fc53263aa7e40af1c8dcd3f54d8e8ed14e57962477c3678d61494efa132dbd"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.787684 4909 scope.go:117] "RemoveContainer" containerID="80fc53263aa7e40af1c8dcd3f54d8e8ed14e57962477c3678d61494efa132dbd" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.791461 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" event={"ID":"138eaa02-be79-4e16-8627-cc582d5b6770","Type":"ContainerDied","Data":"fd4c4b86e3f3f86cb067ee781caf02d5c897cf7cbba236d11652b50a8feacdc5"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.792381 4909 scope.go:117] "RemoveContainer" containerID="fd4c4b86e3f3f86cb067ee781caf02d5c897cf7cbba236d11652b50a8feacdc5" Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.792734 4909 generic.go:334] "Generic (PLEG): container finished" podID="138eaa02-be79-4e16-8627-cc582d5b6770" containerID="fd4c4b86e3f3f86cb067ee781caf02d5c897cf7cbba236d11652b50a8feacdc5" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.795324 4909 generic.go:334] "Generic (PLEG): container finished" podID="757566f7-a07b-4623-8668-b39f715ea7a9" containerID="d8e79311312214891d8e4375b26e5a78f6dcfccc9c7f94a137bb0e3d16cb2b98" exitCode=1 Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.795366 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" event={"ID":"757566f7-a07b-4623-8668-b39f715ea7a9","Type":"ContainerDied","Data":"d8e79311312214891d8e4375b26e5a78f6dcfccc9c7f94a137bb0e3d16cb2b98"} Nov 26 07:24:39 crc kubenswrapper[4909]: I1126 07:24:39.795866 4909 scope.go:117] "RemoveContainer" containerID="d8e79311312214891d8e4375b26e5a78f6dcfccc9c7f94a137bb0e3d16cb2b98" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.531841 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.532064 4909 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.532300 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.694788 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.810537 4909 generic.go:334] "Generic (PLEG): container finished" podID="f4c87de0-5b1c-44f8-a2fb-1949a3f4af03" containerID="64fd166960b86103c5f24488aab8af56cafe6527f188efe85add4a0ba1fe7ca4" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.810633 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" event={"ID":"f4c87de0-5b1c-44f8-a2fb-1949a3f4af03","Type":"ContainerDied","Data":"64fd166960b86103c5f24488aab8af56cafe6527f188efe85add4a0ba1fe7ca4"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.810674 4909 scope.go:117] "RemoveContainer" containerID="04cb65e8b18dc7bdf040f74d21ac9f4198eb4fc44f2cf45dfe14eb8552b1ca17" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.811259 4909 scope.go:117] "RemoveContainer" containerID="64fd166960b86103c5f24488aab8af56cafe6527f188efe85add4a0ba1fe7ca4" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.811497 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-698d6fd7d6-692sc_openstack-operators(f4c87de0-5b1c-44f8-a2fb-1949a3f4af03)\"" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" podUID="f4c87de0-5b1c-44f8-a2fb-1949a3f4af03" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.844963 4909 generic.go:334] "Generic (PLEG): container finished" podID="cd83d237-7922-4458-9fce-8c296d0ccc0f" containerID="1e80c5f888565e4393bdf4da78236f3fb1563e2468647edf16843f0be24c0ddb" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.845030 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" event={"ID":"cd83d237-7922-4458-9fce-8c296d0ccc0f","Type":"ContainerDied","Data":"1e80c5f888565e4393bdf4da78236f3fb1563e2468647edf16843f0be24c0ddb"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.845564 4909 scope.go:117] "RemoveContainer" containerID="1e80c5f888565e4393bdf4da78236f3fb1563e2468647edf16843f0be24c0ddb" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.845882 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-6bd966bbd4-6j4kw_openstack-operators(cd83d237-7922-4458-9fce-8c296d0ccc0f)\"" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" podUID="cd83d237-7922-4458-9fce-8c296d0ccc0f" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.861951 4909 generic.go:334] "Generic (PLEG): container finished" podID="61289245-0b12-4689-8a98-2b24544cacf8" containerID="0549ff9b555e8e81a4f23f1dcf94be262695608d9edb908bbc598a4181f19204" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.862029 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" 
event={"ID":"61289245-0b12-4689-8a98-2b24544cacf8","Type":"ContainerDied","Data":"0549ff9b555e8e81a4f23f1dcf94be262695608d9edb908bbc598a4181f19204"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.862581 4909 scope.go:117] "RemoveContainer" containerID="0549ff9b555e8e81a4f23f1dcf94be262695608d9edb908bbc598a4181f19204" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.862923 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-7979c68bc7-c696l_openstack-operators(61289245-0b12-4689-8a98-2b24544cacf8)\"" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" podUID="61289245-0b12-4689-8a98-2b24544cacf8" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.872489 4909 generic.go:334] "Generic (PLEG): container finished" podID="f8afd5eb-02e8-4a94-be0d-19a709270945" containerID="ec934b246a7b768f7d74a7fdef300a7c00eedb3a9eb3304a0eecbe6905f071c5" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.872545 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" event={"ID":"f8afd5eb-02e8-4a94-be0d-19a709270945","Type":"ContainerDied","Data":"ec934b246a7b768f7d74a7fdef300a7c00eedb3a9eb3304a0eecbe6905f071c5"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.873052 4909 scope.go:117] "RemoveContainer" containerID="ec934b246a7b768f7d74a7fdef300a7c00eedb3a9eb3304a0eecbe6905f071c5" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.873264 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-58487d9bf4-7rjcs_openstack-operators(f8afd5eb-02e8-4a94-be0d-19a709270945)\"" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" podUID="f8afd5eb-02e8-4a94-be0d-19a709270945" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.875413 4909 generic.go:334] "Generic (PLEG): container finished" podID="b3ca7f6d-4dba-4e22-ae42-f4184932fba2" containerID="ab73a6dfe2de6e6ed948050bd837e445392bd72d77314d1a71b0f211e84241d8" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.875452 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" event={"ID":"b3ca7f6d-4dba-4e22-ae42-f4184932fba2","Type":"ContainerDied","Data":"ab73a6dfe2de6e6ed948050bd837e445392bd72d77314d1a71b0f211e84241d8"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.875733 4909 scope.go:117] "RemoveContainer" containerID="ab73a6dfe2de6e6ed948050bd837e445392bd72d77314d1a71b0f211e84241d8" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.875951 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-6788cc6d75-scqbd_openstack-operators(b3ca7f6d-4dba-4e22-ae42-f4184932fba2)\"" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" podUID="b3ca7f6d-4dba-4e22-ae42-f4184932fba2" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.877845 4909 generic.go:334] "Generic (PLEG): container finished" podID="8c9c6404-9f47-434c-ac1b-d08cd48d5156" 
containerID="bd3ffcc10e90834af6436a49dd06e7bafbec5df26fe4e03b67a0337fab8666fc" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.877884 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" event={"ID":"8c9c6404-9f47-434c-ac1b-d08cd48d5156","Type":"ContainerDied","Data":"bd3ffcc10e90834af6436a49dd06e7bafbec5df26fe4e03b67a0337fab8666fc"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.878115 4909 scope.go:117] "RemoveContainer" containerID="bd3ffcc10e90834af6436a49dd06e7bafbec5df26fe4e03b67a0337fab8666fc" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.878267 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-54485f899-8486p_openstack-operators(8c9c6404-9f47-434c-ac1b-d08cd48d5156)\"" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" podUID="8c9c6404-9f47-434c-ac1b-d08cd48d5156" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.879855 4909 generic.go:334] "Generic (PLEG): container finished" podID="9f41a032-71ff-4608-aa2c-b16469fe55a0" containerID="f1af1c8b05f92459818d7d97a278aa200bcf8271932c65434af2871a74d105de" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.879895 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" event={"ID":"9f41a032-71ff-4608-aa2c-b16469fe55a0","Type":"ContainerDied","Data":"f1af1c8b05f92459818d7d97a278aa200bcf8271932c65434af2871a74d105de"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.880123 4909 scope.go:117] "RemoveContainer" containerID="f1af1c8b05f92459818d7d97a278aa200bcf8271932c65434af2871a74d105de" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.880280 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-646fd589f9-phclr_openstack-operators(9f41a032-71ff-4608-aa2c-b16469fe55a0)\"" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" podUID="9f41a032-71ff-4608-aa2c-b16469fe55a0" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.882139 4909 generic.go:334] "Generic (PLEG): container finished" podID="757566f7-a07b-4623-8668-b39f715ea7a9" containerID="cb8f7b33bfb61a20cf5ee2459c159a657244e4f720651db26ecd7953a49c48ef" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.882188 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" event={"ID":"757566f7-a07b-4623-8668-b39f715ea7a9","Type":"ContainerDied","Data":"cb8f7b33bfb61a20cf5ee2459c159a657244e4f720651db26ecd7953a49c48ef"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.882411 4909 scope.go:117] "RemoveContainer" containerID="cb8f7b33bfb61a20cf5ee2459c159a657244e4f720651db26ecd7953a49c48ef" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.882604 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-7d6f5d799-4gr4q_openstack-operators(757566f7-a07b-4623-8668-b39f715ea7a9)\"" 
pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" podUID="757566f7-a07b-4623-8668-b39f715ea7a9" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.884452 4909 generic.go:334] "Generic (PLEG): container finished" podID="0ebad6d0-e522-4012-869e-903c89bd1703" containerID="c97e8b1cfca46fb2d719b956cca1ea40667b633faf755e49609cc5494564bf46" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.884491 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" event={"ID":"0ebad6d0-e522-4012-869e-903c89bd1703","Type":"ContainerDied","Data":"c97e8b1cfca46fb2d719b956cca1ea40667b633faf755e49609cc5494564bf46"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.884738 4909 scope.go:117] "RemoveContainer" containerID="c97e8b1cfca46fb2d719b956cca1ea40667b633faf755e49609cc5494564bf46" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.884927 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-7d5d9fd47f-sphql_openstack-operators(0ebad6d0-e522-4012-869e-903c89bd1703)\"" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" podUID="0ebad6d0-e522-4012-869e-903c89bd1703" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.885156 4909 scope.go:117] "RemoveContainer" containerID="80fc53263aa7e40af1c8dcd3f54d8e8ed14e57962477c3678d61494efa132dbd" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.889944 4909 generic.go:334] "Generic (PLEG): container finished" podID="20a1b8f0-7e93-4d4a-b527-7470d128a2bc" containerID="407342b53828599091eee6f806cc95bfae1eb8bedb6d5b23c4b75475d569cbf7" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.889985 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" event={"ID":"20a1b8f0-7e93-4d4a-b527-7470d128a2bc","Type":"ContainerDied","Data":"407342b53828599091eee6f806cc95bfae1eb8bedb6d5b23c4b75475d569cbf7"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.890278 4909 scope.go:117] "RemoveContainer" containerID="407342b53828599091eee6f806cc95bfae1eb8bedb6d5b23c4b75475d569cbf7" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.890461 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-5f97d8c699-w69tb_openstack-operators(20a1b8f0-7e93-4d4a-b527-7470d128a2bc)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" podUID="20a1b8f0-7e93-4d4a-b527-7470d128a2bc" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.892137 4909 generic.go:334] "Generic (PLEG): container finished" podID="10e6987e-11d4-4c64-bc26-bb45590f3fff" containerID="38c8800ab7720866cc65609da62060129d0c4d094d0da404966a84a32fa4aa31" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.892173 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" event={"ID":"10e6987e-11d4-4c64-bc26-bb45590f3fff","Type":"ContainerDied","Data":"38c8800ab7720866cc65609da62060129d0c4d094d0da404966a84a32fa4aa31"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.892428 4909 scope.go:117] "RemoveContainer" 
containerID="38c8800ab7720866cc65609da62060129d0c4d094d0da404966a84a32fa4aa31" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.905070 4909 generic.go:334] "Generic (PLEG): container finished" podID="138eaa02-be79-4e16-8627-cc582d5b6770" containerID="2dc9c82f9ef3462d3cd37273e34fc4bab4fd6602fe6b6d12fb45224716256fec" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.905166 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" event={"ID":"138eaa02-be79-4e16-8627-cc582d5b6770","Type":"ContainerDied","Data":"2dc9c82f9ef3462d3cd37273e34fc4bab4fd6602fe6b6d12fb45224716256fec"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.905683 4909 scope.go:117] "RemoveContainer" containerID="2dc9c82f9ef3462d3cd37273e34fc4bab4fd6602fe6b6d12fb45224716256fec" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.905901 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-748967c98-2x9sp_openstack-operators(138eaa02-be79-4e16-8627-cc582d5b6770)\"" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" podUID="138eaa02-be79-4e16-8627-cc582d5b6770" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.907294 4909 generic.go:334] "Generic (PLEG): container finished" podID="4a162aeb-8377-45aa-bd44-6b8aed2f93fb" containerID="cd27919792f3030a6a04f85adb2e8b5fcd9101798b4d76c73155f3cf47c86a39" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.907344 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" event={"ID":"4a162aeb-8377-45aa-bd44-6b8aed2f93fb","Type":"ContainerDied","Data":"cd27919792f3030a6a04f85adb2e8b5fcd9101798b4d76c73155f3cf47c86a39"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.907642 4909 scope.go:117] "RemoveContainer" containerID="cd27919792f3030a6a04f85adb2e8b5fcd9101798b4d76c73155f3cf47c86a39" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.911048 4909 generic.go:334] "Generic (PLEG): container finished" podID="af4a09dd-04e0-465d-a817-bacf1a52babe" containerID="00244fbe768c0bbb30adff3ba5e8722a86cc361cdf04193c7bb27635317b5c78" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.911090 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" event={"ID":"af4a09dd-04e0-465d-a817-bacf1a52babe","Type":"ContainerDied","Data":"00244fbe768c0bbb30adff3ba5e8722a86cc361cdf04193c7bb27635317b5c78"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.911813 4909 scope.go:117] "RemoveContainer" containerID="00244fbe768c0bbb30adff3ba5e8722a86cc361cdf04193c7bb27635317b5c78" Nov 26 07:24:40 crc kubenswrapper[4909]: E1126 07:24:40.912139 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-6b6c55ffd5-dhp84_openstack-operators(af4a09dd-04e0-465d-a817-bacf1a52babe)\"" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" podUID="af4a09dd-04e0-465d-a817-bacf1a52babe" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.913380 4909 generic.go:334] "Generic (PLEG): container finished" podID="cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef" 
containerID="33910e2e3992da9ec90e83cd704c37836f1b3d4f39dad9abeeee8bf3c0a67373" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.913448 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" event={"ID":"cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef","Type":"ContainerDied","Data":"33910e2e3992da9ec90e83cd704c37836f1b3d4f39dad9abeeee8bf3c0a67373"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.914169 4909 scope.go:117] "RemoveContainer" containerID="33910e2e3992da9ec90e83cd704c37836f1b3d4f39dad9abeeee8bf3c0a67373" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.916786 4909 generic.go:334] "Generic (PLEG): container finished" podID="fea4eb2c-ad33-4504-a4e4-8c82875b2d0c" containerID="fbcd16f049222efa2b4c9f8bf3f134bb2ad678d3f8e128aa18e1d7f8322cc466" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.916838 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" event={"ID":"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c","Type":"ContainerDied","Data":"fbcd16f049222efa2b4c9f8bf3f134bb2ad678d3f8e128aa18e1d7f8322cc466"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.917218 4909 scope.go:117] "RemoveContainer" containerID="fbcd16f049222efa2b4c9f8bf3f134bb2ad678d3f8e128aa18e1d7f8322cc466" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.919432 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" event={"ID":"365248fc-0b34-46df-bbdc-043f89694812","Type":"ContainerStarted","Data":"a3091d0abbe2b1154d89f6b18b0d571181e546573719adf2550186860db411f8"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.919817 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.922273 4909 generic.go:334] "Generic (PLEG): container finished" podID="f7f77917-da54-4e82-a356-80000a53395a" containerID="cce41246bda2a4d3cad65ddff80d5ff5faa3e06b26e9cfbf41081fb38b49d0b6" exitCode=1 Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.922349 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" event={"ID":"f7f77917-da54-4e82-a356-80000a53395a","Type":"ContainerDied","Data":"cce41246bda2a4d3cad65ddff80d5ff5faa3e06b26e9cfbf41081fb38b49d0b6"} Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.923281 4909 scope.go:117] "RemoveContainer" containerID="cce41246bda2a4d3cad65ddff80d5ff5faa3e06b26e9cfbf41081fb38b49d0b6" Nov 26 07:24:40 crc kubenswrapper[4909]: I1126 07:24:40.965113 4909 scope.go:117] "RemoveContainer" containerID="5cc094590d1ef22f9ab1f460dcda65bde788d8fb0fad9d35cac512b326d5e61a" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.041804 4909 scope.go:117] "RemoveContainer" containerID="c7a8b1902520ca416dbc1a1302a978f3619802384f635376b343e714d0c5fa4d" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.088752 4909 scope.go:117] "RemoveContainer" containerID="e550c2419b5af6d03322135ecf4f934e6214479b59cdbfb10e48825d20a9314c" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.121739 4909 scope.go:117] "RemoveContainer" containerID="fb7999a7f7ff3cd8b133f4854000a9b97fe779663bdbcc7766d0e10a64451e17" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.146651 4909 scope.go:117] "RemoveContainer" 
containerID="ac1a2edc25071651334d0ffbc1843b636e077a8204acf9400bfc1803e4395a58" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.179527 4909 scope.go:117] "RemoveContainer" containerID="d8e79311312214891d8e4375b26e5a78f6dcfccc9c7f94a137bb0e3d16cb2b98" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.199424 4909 scope.go:117] "RemoveContainer" containerID="f252468407ffea2990ccc044949fabec74a0e6724982ea1fc7ab61ae0f7bdaf7" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.221787 4909 scope.go:117] "RemoveContainer" containerID="79c93578600ecd3758192b8bf7324e13d17a763ba661c201ae2a6a523c5c904a" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.249663 4909 scope.go:117] "RemoveContainer" containerID="fd4c4b86e3f3f86cb067ee781caf02d5c897cf7cbba236d11652b50a8feacdc5" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.279685 4909 scope.go:117] "RemoveContainer" containerID="bc1d125ffc63dafd4f3c4861ea058ef2d347c94047260544e67516e6c7b32347" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.745063 4909 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.933201 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.937009 4909 generic.go:334] "Generic (PLEG): container finished" podID="cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef" containerID="56ebc0c15c90d64e49ad7319712fb589c87dfa4655684e9102faa885a9c20e2c" exitCode=1 Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.937060 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" event={"ID":"cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef","Type":"ContainerDied","Data":"56ebc0c15c90d64e49ad7319712fb589c87dfa4655684e9102faa885a9c20e2c"} Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.937157 4909 scope.go:117] "RemoveContainer" containerID="33910e2e3992da9ec90e83cd704c37836f1b3d4f39dad9abeeee8bf3c0a67373" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.937688 4909 scope.go:117] "RemoveContainer" containerID="56ebc0c15c90d64e49ad7319712fb589c87dfa4655684e9102faa885a9c20e2c" Nov 26 07:24:41 crc kubenswrapper[4909]: E1126 07:24:41.937889 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-64d7c556cd-872rr_openstack-operators(cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef)\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" podUID="cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.939536 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.940858 4909 generic.go:334] "Generic (PLEG): container finished" podID="10e6987e-11d4-4c64-bc26-bb45590f3fff" containerID="c1ad00ba3622d62a15b07e1eb86c9b1a64cf5bbb4c09a875dff3539d82441f89" exitCode=1 Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.940940 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" 
event={"ID":"10e6987e-11d4-4c64-bc26-bb45590f3fff","Type":"ContainerDied","Data":"c1ad00ba3622d62a15b07e1eb86c9b1a64cf5bbb4c09a875dff3539d82441f89"} Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.941507 4909 scope.go:117] "RemoveContainer" containerID="c1ad00ba3622d62a15b07e1eb86c9b1a64cf5bbb4c09a875dff3539d82441f89" Nov 26 07:24:41 crc kubenswrapper[4909]: E1126 07:24:41.941768 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-867d87977b-5t9sx_openstack-operators(10e6987e-11d4-4c64-bc26-bb45590f3fff)\"" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" podUID="10e6987e-11d4-4c64-bc26-bb45590f3fff" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.943607 4909 generic.go:334] "Generic (PLEG): container finished" podID="cad0b373-54da-4331-aa01-27d08edaa1ef" containerID="dead8b7e9834ca28e8c1aa52ec89f6bb684f45f6199fe905ea9ab11bb8c8ae3b" exitCode=1 Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.943634 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" event={"ID":"cad0b373-54da-4331-aa01-27d08edaa1ef","Type":"ContainerDied","Data":"dead8b7e9834ca28e8c1aa52ec89f6bb684f45f6199fe905ea9ab11bb8c8ae3b"} Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.943952 4909 scope.go:117] "RemoveContainer" containerID="dead8b7e9834ca28e8c1aa52ec89f6bb684f45f6199fe905ea9ab11bb8c8ae3b" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.956963 4909 generic.go:334] "Generic (PLEG): container finished" podID="f7f77917-da54-4e82-a356-80000a53395a" containerID="3c96b7a91f73c9cb8cf8092250fe91c63dbd4554991f79f360da82fd79f86748" exitCode=1 Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.957028 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" event={"ID":"f7f77917-da54-4e82-a356-80000a53395a","Type":"ContainerDied","Data":"3c96b7a91f73c9cb8cf8092250fe91c63dbd4554991f79f360da82fd79f86748"} Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.957574 4909 scope.go:117] "RemoveContainer" containerID="3c96b7a91f73c9cb8cf8092250fe91c63dbd4554991f79f360da82fd79f86748" Nov 26 07:24:41 crc kubenswrapper[4909]: E1126 07:24:41.957898 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-5bfbbb859d-2cwgh_openstack-operators(f7f77917-da54-4e82-a356-80000a53395a)\"" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" podUID="f7f77917-da54-4e82-a356-80000a53395a" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.965715 4909 generic.go:334] "Generic (PLEG): container finished" podID="ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4" containerID="c776eb9eb4ab8b9b3add0bcaab548f59d41d097d83cec5fde25f6c99843ba162" exitCode=1 Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.965772 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" event={"ID":"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4","Type":"ContainerDied","Data":"c776eb9eb4ab8b9b3add0bcaab548f59d41d097d83cec5fde25f6c99843ba162"} Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.972056 4909 scope.go:117] 
"RemoveContainer" containerID="c776eb9eb4ab8b9b3add0bcaab548f59d41d097d83cec5fde25f6c99843ba162" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.975278 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.979030 4909 scope.go:117] "RemoveContainer" containerID="2dc9c82f9ef3462d3cd37273e34fc4bab4fd6602fe6b6d12fb45224716256fec" Nov 26 07:24:41 crc kubenswrapper[4909]: E1126 07:24:41.979253 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-748967c98-2x9sp_openstack-operators(138eaa02-be79-4e16-8627-cc582d5b6770)\"" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" podUID="138eaa02-be79-4e16-8627-cc582d5b6770" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.984540 4909 scope.go:117] "RemoveContainer" containerID="38c8800ab7720866cc65609da62060129d0c4d094d0da404966a84a32fa4aa31" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.985502 4909 generic.go:334] "Generic (PLEG): container finished" podID="4a162aeb-8377-45aa-bd44-6b8aed2f93fb" containerID="bf72c72329779296739bb26f0ab0ed629d06e2935b1e7a1e2543ad964fd70068" exitCode=1 Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.985549 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" event={"ID":"4a162aeb-8377-45aa-bd44-6b8aed2f93fb","Type":"ContainerDied","Data":"bf72c72329779296739bb26f0ab0ed629d06e2935b1e7a1e2543ad964fd70068"} Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.985926 4909 scope.go:117] "RemoveContainer" containerID="bf72c72329779296739bb26f0ab0ed629d06e2935b1e7a1e2543ad964fd70068" Nov 26 07:24:41 crc kubenswrapper[4909]: E1126 07:24:41.986124 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79d658b66d-swdlm_openstack-operators(4a162aeb-8377-45aa-bd44-6b8aed2f93fb)\"" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" podUID="4a162aeb-8377-45aa-bd44-6b8aed2f93fb" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.989435 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.990562 4909 generic.go:334] "Generic (PLEG): container finished" podID="fea4eb2c-ad33-4504-a4e4-8c82875b2d0c" containerID="ee8d1f2d4ed9b13983d75a1ba0dfde64ce89cdcc842927228a623068442fde74" exitCode=1 Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.990614 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" event={"ID":"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c","Type":"ContainerDied","Data":"ee8d1f2d4ed9b13983d75a1ba0dfde64ce89cdcc842927228a623068442fde74"} Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.990963 4909 scope.go:117] "RemoveContainer" containerID="ee8d1f2d4ed9b13983d75a1ba0dfde64ce89cdcc842927228a623068442fde74" Nov 26 07:24:41 crc kubenswrapper[4909]: E1126 07:24:41.991180 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-68c78b6ff8-dmnlq_openstack-operators(fea4eb2c-ad33-4504-a4e4-8c82875b2d0c)\"" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" podUID="fea4eb2c-ad33-4504-a4e4-8c82875b2d0c" Nov 26 07:24:41 crc kubenswrapper[4909]: I1126 07:24:41.993687 4909 scope.go:117] "RemoveContainer" containerID="ab73a6dfe2de6e6ed948050bd837e445392bd72d77314d1a71b0f211e84241d8" Nov 26 07:24:41 crc kubenswrapper[4909]: E1126 07:24:41.993852 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-6788cc6d75-scqbd_openstack-operators(b3ca7f6d-4dba-4e22-ae42-f4184932fba2)\"" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" podUID="b3ca7f6d-4dba-4e22-ae42-f4184932fba2" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.001342 4909 scope.go:117] "RemoveContainer" containerID="1e80c5f888565e4393bdf4da78236f3fb1563e2468647edf16843f0be24c0ddb" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.001412 4909 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.001430 4909 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b" Nov 26 07:24:42 crc kubenswrapper[4909]: E1126 07:24:42.001576 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-6bd966bbd4-6j4kw_openstack-operators(cd83d237-7922-4458-9fce-8c296d0ccc0f)\"" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" podUID="cd83d237-7922-4458-9fce-8c296d0ccc0f" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.005323 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.005401 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.005751 4909 scope.go:117] "RemoveContainer" containerID="64fd166960b86103c5f24488aab8af56cafe6527f188efe85add4a0ba1fe7ca4" Nov 26 07:24:42 crc kubenswrapper[4909]: E1126 07:24:42.005935 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-698d6fd7d6-692sc_openstack-operators(f4c87de0-5b1c-44f8-a2fb-1949a3f4af03)\"" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" podUID="f4c87de0-5b1c-44f8-a2fb-1949a3f4af03" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.053850 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.054371 4909 scope.go:117] "RemoveContainer" containerID="c97e8b1cfca46fb2d719b956cca1ea40667b633faf755e49609cc5494564bf46" Nov 26 07:24:42 crc 
kubenswrapper[4909]: E1126 07:24:42.054890 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-7d5d9fd47f-sphql_openstack-operators(0ebad6d0-e522-4012-869e-903c89bd1703)\"" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" podUID="0ebad6d0-e522-4012-869e-903c89bd1703" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.098887 4909 scope.go:117] "RemoveContainer" containerID="cce41246bda2a4d3cad65ddff80d5ff5faa3e06b26e9cfbf41081fb38b49d0b6" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.131734 4909 scope.go:117] "RemoveContainer" containerID="cd27919792f3030a6a04f85adb2e8b5fcd9101798b4d76c73155f3cf47c86a39" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.152575 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.153193 4909 scope.go:117] "RemoveContainer" containerID="cb8f7b33bfb61a20cf5ee2459c159a657244e4f720651db26ecd7953a49c48ef" Nov 26 07:24:42 crc kubenswrapper[4909]: E1126 07:24:42.153490 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-7d6f5d799-4gr4q_openstack-operators(757566f7-a07b-4623-8668-b39f715ea7a9)\"" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" podUID="757566f7-a07b-4623-8668-b39f715ea7a9" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.155314 4909 scope.go:117] "RemoveContainer" containerID="fbcd16f049222efa2b4c9f8bf3f134bb2ad678d3f8e128aa18e1d7f8322cc466" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.314839 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.315370 4909 scope.go:117] "RemoveContainer" containerID="f1af1c8b05f92459818d7d97a278aa200bcf8271932c65434af2871a74d105de" Nov 26 07:24:42 crc kubenswrapper[4909]: E1126 07:24:42.315574 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-646fd589f9-phclr_openstack-operators(9f41a032-71ff-4608-aa2c-b16469fe55a0)\"" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" podUID="9f41a032-71ff-4608-aa2c-b16469fe55a0" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.353544 4909 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="79051699-f697-4312-9573-4aa0b4d075b3" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.384875 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.398932 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.399491 4909 scope.go:117] 
"RemoveContainer" containerID="bd3ffcc10e90834af6436a49dd06e7bafbec5df26fe4e03b67a0337fab8666fc" Nov 26 07:24:42 crc kubenswrapper[4909]: E1126 07:24:42.399739 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-54485f899-8486p_openstack-operators(8c9c6404-9f47-434c-ac1b-d08cd48d5156)\"" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" podUID="8c9c6404-9f47-434c-ac1b-d08cd48d5156" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.410394 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.410972 4909 scope.go:117] "RemoveContainer" containerID="0549ff9b555e8e81a4f23f1dcf94be262695608d9edb908bbc598a4181f19204" Nov 26 07:24:42 crc kubenswrapper[4909]: E1126 07:24:42.411166 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-7979c68bc7-c696l_openstack-operators(61289245-0b12-4689-8a98-2b24544cacf8)\"" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" podUID="61289245-0b12-4689-8a98-2b24544cacf8" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.465625 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.486782 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.556355 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.557044 4909 scope.go:117] "RemoveContainer" containerID="ec934b246a7b768f7d74a7fdef300a7c00eedb3a9eb3304a0eecbe6905f071c5" Nov 26 07:24:42 crc kubenswrapper[4909]: E1126 07:24:42.557283 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-58487d9bf4-7rjcs_openstack-operators(f8afd5eb-02e8-4a94-be0d-19a709270945)\"" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" podUID="f8afd5eb-02e8-4a94-be0d-19a709270945" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.616937 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.633437 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.634118 4909 scope.go:117] "RemoveContainer" containerID="00244fbe768c0bbb30adff3ba5e8722a86cc361cdf04193c7bb27635317b5c78" Nov 26 07:24:42 crc kubenswrapper[4909]: E1126 07:24:42.634388 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-6b6c55ffd5-dhp84_openstack-operators(af4a09dd-04e0-465d-a817-bacf1a52babe)\"" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" podUID="af4a09dd-04e0-465d-a817-bacf1a52babe" Nov 26 07:24:42 crc kubenswrapper[4909]: I1126 07:24:42.667263 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.016412 4909 generic.go:334] "Generic (PLEG): container finished" podID="ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4" containerID="f6138a1f55b88f5c1790a98c203b524cb85b7e08912bbb77286fea76f6732691" exitCode=1 Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.016492 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" event={"ID":"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4","Type":"ContainerDied","Data":"f6138a1f55b88f5c1790a98c203b524cb85b7e08912bbb77286fea76f6732691"} Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.017158 4909 scope.go:117] "RemoveContainer" containerID="f6138a1f55b88f5c1790a98c203b524cb85b7e08912bbb77286fea76f6732691" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.017291 4909 scope.go:117] "RemoveContainer" containerID="c776eb9eb4ab8b9b3add0bcaab548f59d41d097d83cec5fde25f6c99843ba162" Nov 26 07:24:43 crc kubenswrapper[4909]: E1126 07:24:43.017615 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-577c5f6d94-d44wm_openstack-operators(ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4)\"" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" podUID="ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.021743 4909 scope.go:117] "RemoveContainer" containerID="bf72c72329779296739bb26f0ab0ed629d06e2935b1e7a1e2543ad964fd70068" Nov 26 07:24:43 crc kubenswrapper[4909]: E1126 07:24:43.022213 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79d658b66d-swdlm_openstack-operators(4a162aeb-8377-45aa-bd44-6b8aed2f93fb)\"" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" podUID="4a162aeb-8377-45aa-bd44-6b8aed2f93fb" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.030630 4909 generic.go:334] "Generic (PLEG): container finished" podID="cad0b373-54da-4331-aa01-27d08edaa1ef" containerID="c1e064cfab488367ef03507abc5fcd093286ba98197b94e60836635ebd37ab9d" exitCode=1 Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.030761 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" event={"ID":"cad0b373-54da-4331-aa01-27d08edaa1ef","Type":"ContainerDied","Data":"c1e064cfab488367ef03507abc5fcd093286ba98197b94e60836635ebd37ab9d"} Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.031705 4909 scope.go:117] "RemoveContainer" containerID="c1e064cfab488367ef03507abc5fcd093286ba98197b94e60836635ebd37ab9d" Nov 26 07:24:43 crc kubenswrapper[4909]: E1126 07:24:43.033224 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-5b67cfc8fb-xcrzl_openstack-operators(cad0b373-54da-4331-aa01-27d08edaa1ef)\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" podUID="cad0b373-54da-4331-aa01-27d08edaa1ef" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.044739 4909 scope.go:117] "RemoveContainer" containerID="3c96b7a91f73c9cb8cf8092250fe91c63dbd4554991f79f360da82fd79f86748" Nov 26 07:24:43 crc kubenswrapper[4909]: E1126 07:24:43.045036 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-5bfbbb859d-2cwgh_openstack-operators(f7f77917-da54-4e82-a356-80000a53395a)\"" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" podUID="f7f77917-da54-4e82-a356-80000a53395a" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.052624 4909 scope.go:117] "RemoveContainer" containerID="56ebc0c15c90d64e49ad7319712fb589c87dfa4655684e9102faa885a9c20e2c" Nov 26 07:24:43 crc kubenswrapper[4909]: E1126 07:24:43.052861 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-64d7c556cd-872rr_openstack-operators(cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef)\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" podUID="cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.054268 4909 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.054294 4909 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.054290 4909 scope.go:117] "RemoveContainer" containerID="c1ad00ba3622d62a15b07e1eb86c9b1a64cf5bbb4c09a875dff3539d82441f89" Nov 26 07:24:43 crc kubenswrapper[4909]: E1126 07:24:43.054667 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-867d87977b-5t9sx_openstack-operators(10e6987e-11d4-4c64-bc26-bb45590f3fff)\"" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" podUID="10e6987e-11d4-4c64-bc26-bb45590f3fff" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.073776 4909 scope.go:117] "RemoveContainer" containerID="dead8b7e9834ca28e8c1aa52ec89f6bb684f45f6199fe905ea9ab11bb8c8ae3b" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.402649 4909 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="79051699-f697-4312-9573-4aa0b4d075b3" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.419998 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.420971 4909 scope.go:117] "RemoveContainer" 
containerID="ee8d1f2d4ed9b13983d75a1ba0dfde64ce89cdcc842927228a623068442fde74" Nov 26 07:24:43 crc kubenswrapper[4909]: E1126 07:24:43.421506 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-68c78b6ff8-dmnlq_openstack-operators(fea4eb2c-ad33-4504-a4e4-8c82875b2d0c)\"" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" podUID="fea4eb2c-ad33-4504-a4e4-8c82875b2d0c" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.499512 4909 scope.go:117] "RemoveContainer" containerID="9b4d011e536fc46d2cb5d3846c98b802e172f0184e45ea0504ce2fd123dfb7ca" Nov 26 07:24:43 crc kubenswrapper[4909]: I1126 07:24:43.946768 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" podUID="b68371f8-f38e-44e5-bd68-d059f1e3e89a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.070444 4909 generic.go:334] "Generic (PLEG): container finished" podID="b68371f8-f38e-44e5-bd68-d059f1e3e89a" containerID="cd38a99f92bc1ea46612d73d030c2dc68d5d425136395d9f7c844194b70d0b7c" exitCode=1 Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.070551 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" event={"ID":"b68371f8-f38e-44e5-bd68-d059f1e3e89a","Type":"ContainerDied","Data":"cd38a99f92bc1ea46612d73d030c2dc68d5d425136395d9f7c844194b70d0b7c"} Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.071194 4909 scope.go:117] "RemoveContainer" containerID="cd38a99f92bc1ea46612d73d030c2dc68d5d425136395d9f7c844194b70d0b7c" Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.074553 4909 generic.go:334] "Generic (PLEG): container finished" podID="5b985112-f6b3-4879-b02e-8ac0e510730b" containerID="ce221dd83457649a5f28dcd0fcc35b3a63fafc0eee1cfe01f7b7e7a20fb689dd" exitCode=1 Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.074581 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" event={"ID":"5b985112-f6b3-4879-b02e-8ac0e510730b","Type":"ContainerDied","Data":"ce221dd83457649a5f28dcd0fcc35b3a63fafc0eee1cfe01f7b7e7a20fb689dd"} Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.075032 4909 scope.go:117] "RemoveContainer" containerID="ce221dd83457649a5f28dcd0fcc35b3a63fafc0eee1cfe01f7b7e7a20fb689dd" Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.086262 4909 generic.go:334] "Generic (PLEG): container finished" podID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" containerID="0ebf42b05e16b6ffbd9580b90f66273bb86f08a71e9e3cec7d1356bf922df906" exitCode=1 Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.086363 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" event={"ID":"8ace07e4-e65b-451c-8623-f71b4f7d4f14","Type":"ContainerDied","Data":"0ebf42b05e16b6ffbd9580b90f66273bb86f08a71e9e3cec7d1356bf922df906"} Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.086411 4909 scope.go:117] "RemoveContainer" containerID="9b4d011e536fc46d2cb5d3846c98b802e172f0184e45ea0504ce2fd123dfb7ca" Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 
07:24:44.087132 4909 scope.go:117] "RemoveContainer" containerID="0ebf42b05e16b6ffbd9580b90f66273bb86f08a71e9e3cec7d1356bf922df906" Nov 26 07:24:44 crc kubenswrapper[4909]: E1126 07:24:44.087409 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-58dcdd989d-ctkx2_metallb-system(8ace07e4-e65b-451c-8623-f71b4f7d4f14)\"" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" podUID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.092823 4909 generic.go:334] "Generic (PLEG): container finished" podID="dd0d0446-c640-42e7-9ff6-e71e59e4a459" containerID="e702e5ed145f6afda1182310485b2ce992e5e685fe432b41ddc1a904abe6bfc4" exitCode=1 Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.093034 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" event={"ID":"dd0d0446-c640-42e7-9ff6-e71e59e4a459","Type":"ContainerDied","Data":"e702e5ed145f6afda1182310485b2ce992e5e685fe432b41ddc1a904abe6bfc4"} Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.093894 4909 scope.go:117] "RemoveContainer" containerID="e702e5ed145f6afda1182310485b2ce992e5e685fe432b41ddc1a904abe6bfc4" Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.100524 4909 generic.go:334] "Generic (PLEG): container finished" podID="0f99fe6f-9209-4c74-9bcb-619212d7812e" containerID="aff179b7fdfc67333a63677b6ff434b58e2d436093be19cb0786cbfec335676d" exitCode=1 Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.100636 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" event={"ID":"0f99fe6f-9209-4c74-9bcb-619212d7812e","Type":"ContainerDied","Data":"aff179b7fdfc67333a63677b6ff434b58e2d436093be19cb0786cbfec335676d"} Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.101564 4909 scope.go:117] "RemoveContainer" containerID="aff179b7fdfc67333a63677b6ff434b58e2d436093be19cb0786cbfec335676d" Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.106644 4909 scope.go:117] "RemoveContainer" containerID="f6138a1f55b88f5c1790a98c203b524cb85b7e08912bbb77286fea76f6732691" Nov 26 07:24:44 crc kubenswrapper[4909]: E1126 07:24:44.106990 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-577c5f6d94-d44wm_openstack-operators(ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4)\"" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" podUID="ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4" Nov 26 07:24:44 crc kubenswrapper[4909]: I1126 07:24:44.113302 4909 scope.go:117] "RemoveContainer" containerID="c1e064cfab488367ef03507abc5fcd093286ba98197b94e60836635ebd37ab9d" Nov 26 07:24:44 crc kubenswrapper[4909]: E1126 07:24:44.113530 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-5b67cfc8fb-xcrzl_openstack-operators(cad0b373-54da-4331-aa01-27d08edaa1ef)\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" podUID="cad0b373-54da-4331-aa01-27d08edaa1ef" Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.126110 
4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" event={"ID":"dd0d0446-c640-42e7-9ff6-e71e59e4a459","Type":"ContainerStarted","Data":"13ef7f2ad9b12c50190040dcc1e40414f255ad367d050f67ab51414f81082e88"} Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.126935 4909 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" containerID="cri-o://e702e5ed145f6afda1182310485b2ce992e5e685fe432b41ddc1a904abe6bfc4" Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.126949 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.130340 4909 generic.go:334] "Generic (PLEG): container finished" podID="0f99fe6f-9209-4c74-9bcb-619212d7812e" containerID="46550aa01be1e94397316e0aa3bdac273e255140e5a727a3cc26a4e6b6c20b30" exitCode=1 Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.130381 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" event={"ID":"0f99fe6f-9209-4c74-9bcb-619212d7812e","Type":"ContainerDied","Data":"46550aa01be1e94397316e0aa3bdac273e255140e5a727a3cc26a4e6b6c20b30"} Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.130446 4909 scope.go:117] "RemoveContainer" containerID="aff179b7fdfc67333a63677b6ff434b58e2d436093be19cb0786cbfec335676d" Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.131178 4909 scope.go:117] "RemoveContainer" containerID="46550aa01be1e94397316e0aa3bdac273e255140e5a727a3cc26a4e6b6c20b30" Nov 26 07:24:45 crc kubenswrapper[4909]: E1126 07:24:45.131513 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-6b56b8849f-fd6dq_openstack-operators(0f99fe6f-9209-4c74-9bcb-619212d7812e)\"" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" podUID="0f99fe6f-9209-4c74-9bcb-619212d7812e" Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.133150 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" event={"ID":"b68371f8-f38e-44e5-bd68-d059f1e3e89a","Type":"ContainerStarted","Data":"ece459f860a407c4bdaf5c57cb786294c9009d72d177cabfb0a5a3cb2956d513"} Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.133348 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.138036 4909 generic.go:334] "Generic (PLEG): container finished" podID="5b985112-f6b3-4879-b02e-8ac0e510730b" containerID="04974c57da082b27c1ecc3e323138bd5ea8565218f48c5fb3988d32bad90d303" exitCode=1 Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.138072 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" event={"ID":"5b985112-f6b3-4879-b02e-8ac0e510730b","Type":"ContainerDied","Data":"04974c57da082b27c1ecc3e323138bd5ea8565218f48c5fb3988d32bad90d303"} Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.138402 4909 scope.go:117] "RemoveContainer" 
containerID="04974c57da082b27c1ecc3e323138bd5ea8565218f48c5fb3988d32bad90d303" Nov 26 07:24:45 crc kubenswrapper[4909]: E1126 07:24:45.138614 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-cc9f5bc5c-kbwpk_openstack-operators(5b985112-f6b3-4879-b02e-8ac0e510730b)\"" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" podUID="5b985112-f6b3-4879-b02e-8ac0e510730b" Nov 26 07:24:45 crc kubenswrapper[4909]: I1126 07:24:45.173550 4909 scope.go:117] "RemoveContainer" containerID="ce221dd83457649a5f28dcd0fcc35b3a63fafc0eee1cfe01f7b7e7a20fb689dd" Nov 26 07:24:46 crc kubenswrapper[4909]: I1126 07:24:46.153506 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" Nov 26 07:24:50 crc kubenswrapper[4909]: I1126 07:24:50.531398 4909 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 26 07:24:50 crc kubenswrapper[4909]: I1126 07:24:50.531779 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.142558 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.508979 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.698820 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.759526 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.934163 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.934219 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.934694 4909 scope.go:117] "RemoveContainer" containerID="3c96b7a91f73c9cb8cf8092250fe91c63dbd4554991f79f360da82fd79f86748" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.939717 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.940805 4909 scope.go:117] "RemoveContainer" containerID="2dc9c82f9ef3462d3cd37273e34fc4bab4fd6602fe6b6d12fb45224716256fec" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 
07:24:51.976347 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.977191 4909 scope.go:117] "RemoveContainer" containerID="ab73a6dfe2de6e6ed948050bd837e445392bd72d77314d1a71b0f211e84241d8" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.989512 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:24:51 crc kubenswrapper[4909]: I1126 07:24:51.990213 4909 scope.go:117] "RemoveContainer" containerID="1e80c5f888565e4393bdf4da78236f3fb1563e2468647edf16843f0be24c0ddb" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.005009 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.005724 4909 scope.go:117] "RemoveContainer" containerID="64fd166960b86103c5f24488aab8af56cafe6527f188efe85add4a0ba1fe7ca4" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.054215 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.055152 4909 scope.go:117] "RemoveContainer" containerID="c97e8b1cfca46fb2d719b956cca1ea40667b633faf755e49609cc5494564bf46" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.151194 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.151700 4909 scope.go:117] "RemoveContainer" containerID="cb8f7b33bfb61a20cf5ee2459c159a657244e4f720651db26ecd7953a49c48ef" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.160514 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.314381 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.315311 4909 scope.go:117] "RemoveContainer" containerID="f1af1c8b05f92459818d7d97a278aa200bcf8271932c65434af2871a74d105de" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.385266 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.385879 4909 scope.go:117] "RemoveContainer" containerID="bf72c72329779296739bb26f0ab0ed629d06e2935b1e7a1e2543ad964fd70068" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.398945 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.399880 4909 scope.go:117] "RemoveContainer" containerID="bd3ffcc10e90834af6436a49dd06e7bafbec5df26fe4e03b67a0337fab8666fc" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.412200 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" Nov 26 
07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.413787 4909 scope.go:117] "RemoveContainer" containerID="0549ff9b555e8e81a4f23f1dcf94be262695608d9edb908bbc598a4181f19204" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.465751 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.465811 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.466505 4909 scope.go:117] "RemoveContainer" containerID="c1e064cfab488367ef03507abc5fcd093286ba98197b94e60836635ebd37ab9d" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.487417 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.488081 4909 scope.go:117] "RemoveContainer" containerID="c1ad00ba3622d62a15b07e1eb86c9b1a64cf5bbb4c09a875dff3539d82441f89" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.505998 4909 generic.go:334] "Generic (PLEG): container finished" podID="0ebad6d0-e522-4012-869e-903c89bd1703" containerID="ee38ad77f2fd0b5de89773335f2948ab1663d545529246f231c7f2b572616ad6" exitCode=1 Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.508328 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" event={"ID":"0ebad6d0-e522-4012-869e-903c89bd1703","Type":"ContainerDied","Data":"ee38ad77f2fd0b5de89773335f2948ab1663d545529246f231c7f2b572616ad6"} Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.508386 4909 scope.go:117] "RemoveContainer" containerID="c97e8b1cfca46fb2d719b956cca1ea40667b633faf755e49609cc5494564bf46" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.509504 4909 scope.go:117] "RemoveContainer" containerID="ee38ad77f2fd0b5de89773335f2948ab1663d545529246f231c7f2b572616ad6" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.509498 4909 generic.go:334] "Generic (PLEG): container finished" podID="f4c87de0-5b1c-44f8-a2fb-1949a3f4af03" containerID="bdb7bdfdf8453c4604d534f78f14057e315e0f171be247f50aea612c86a1bace" exitCode=1 Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.509524 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" event={"ID":"f4c87de0-5b1c-44f8-a2fb-1949a3f4af03","Type":"ContainerDied","Data":"bdb7bdfdf8453c4604d534f78f14057e315e0f171be247f50aea612c86a1bace"} Nov 26 07:24:52 crc kubenswrapper[4909]: E1126 07:24:52.510385 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-7d5d9fd47f-sphql_openstack-operators(0ebad6d0-e522-4012-869e-903c89bd1703)\"" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" podUID="0ebad6d0-e522-4012-869e-903c89bd1703" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.514393 4909 scope.go:117] "RemoveContainer" containerID="bdb7bdfdf8453c4604d534f78f14057e315e0f171be247f50aea612c86a1bace" Nov 26 07:24:52 crc kubenswrapper[4909]: E1126 07:24:52.514725 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=heat-operator-controller-manager-698d6fd7d6-692sc_openstack-operators(f4c87de0-5b1c-44f8-a2fb-1949a3f4af03)\"" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" podUID="f4c87de0-5b1c-44f8-a2fb-1949a3f4af03" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.516718 4909 generic.go:334] "Generic (PLEG): container finished" podID="b3ca7f6d-4dba-4e22-ae42-f4184932fba2" containerID="33b2601a1d9c142e8adf599e2031bb8481164a7412550db0ae1a3bc918ea7365" exitCode=1 Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.516781 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" event={"ID":"b3ca7f6d-4dba-4e22-ae42-f4184932fba2","Type":"ContainerDied","Data":"33b2601a1d9c142e8adf599e2031bb8481164a7412550db0ae1a3bc918ea7365"} Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.517274 4909 scope.go:117] "RemoveContainer" containerID="33b2601a1d9c142e8adf599e2031bb8481164a7412550db0ae1a3bc918ea7365" Nov 26 07:24:52 crc kubenswrapper[4909]: E1126 07:24:52.517641 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=designate-operator-controller-manager-6788cc6d75-scqbd_openstack-operators(b3ca7f6d-4dba-4e22-ae42-f4184932fba2)\"" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" podUID="b3ca7f6d-4dba-4e22-ae42-f4184932fba2" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.522274 4909 generic.go:334] "Generic (PLEG): container finished" podID="cd83d237-7922-4458-9fce-8c296d0ccc0f" containerID="962e1953cb5b37242bd0b7d60d3076702a548e009da0760b1f505453497e4d0f" exitCode=1 Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.522325 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" event={"ID":"cd83d237-7922-4458-9fce-8c296d0ccc0f","Type":"ContainerDied","Data":"962e1953cb5b37242bd0b7d60d3076702a548e009da0760b1f505453497e4d0f"} Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.523069 4909 scope.go:117] "RemoveContainer" containerID="962e1953cb5b37242bd0b7d60d3076702a548e009da0760b1f505453497e4d0f" Nov 26 07:24:52 crc kubenswrapper[4909]: E1126 07:24:52.523318 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-6bd966bbd4-6j4kw_openstack-operators(cd83d237-7922-4458-9fce-8c296d0ccc0f)\"" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" podUID="cd83d237-7922-4458-9fce-8c296d0ccc0f" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.523746 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.523784 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.524518 4909 scope.go:117] "RemoveContainer" containerID="04974c57da082b27c1ecc3e323138bd5ea8565218f48c5fb3988d32bad90d303" Nov 26 07:24:52 crc kubenswrapper[4909]: E1126 07:24:52.524828 4909 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-cc9f5bc5c-kbwpk_openstack-operators(5b985112-f6b3-4879-b02e-8ac0e510730b)\"" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" podUID="5b985112-f6b3-4879-b02e-8ac0e510730b" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.528770 4909 generic.go:334] "Generic (PLEG): container finished" podID="f7f77917-da54-4e82-a356-80000a53395a" containerID="f922ef067b47cec85c1b029f70245aa42ddba85b2d40d007f291d582e521243d" exitCode=1 Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.528826 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" event={"ID":"f7f77917-da54-4e82-a356-80000a53395a","Type":"ContainerDied","Data":"f922ef067b47cec85c1b029f70245aa42ddba85b2d40d007f291d582e521243d"} Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.529288 4909 scope.go:117] "RemoveContainer" containerID="f922ef067b47cec85c1b029f70245aa42ddba85b2d40d007f291d582e521243d" Nov 26 07:24:52 crc kubenswrapper[4909]: E1126 07:24:52.529483 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-5bfbbb859d-2cwgh_openstack-operators(f7f77917-da54-4e82-a356-80000a53395a)\"" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" podUID="f7f77917-da54-4e82-a356-80000a53395a" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.532550 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" event={"ID":"9f41a032-71ff-4608-aa2c-b16469fe55a0","Type":"ContainerStarted","Data":"7d10158945536353703d45d536ec03e2c0cbc9628fcbb5f2b98f7a6280016517"} Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.532753 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.544911 4909 generic.go:334] "Generic (PLEG): container finished" podID="138eaa02-be79-4e16-8627-cc582d5b6770" containerID="054e7f83fff6d0f887b532bf343d151476b11eba99e8ccc0d467e9518f561cea" exitCode=1 Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.544977 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" event={"ID":"138eaa02-be79-4e16-8627-cc582d5b6770","Type":"ContainerDied","Data":"054e7f83fff6d0f887b532bf343d151476b11eba99e8ccc0d467e9518f561cea"} Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.545789 4909 scope.go:117] "RemoveContainer" containerID="054e7f83fff6d0f887b532bf343d151476b11eba99e8ccc0d467e9518f561cea" Nov 26 07:24:52 crc kubenswrapper[4909]: E1126 07:24:52.546054 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-748967c98-2x9sp_openstack-operators(138eaa02-be79-4e16-8627-cc582d5b6770)\"" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" podUID="138eaa02-be79-4e16-8627-cc582d5b6770" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.556099 4909 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.556914 4909 scope.go:117] "RemoveContainer" containerID="ec934b246a7b768f7d74a7fdef300a7c00eedb3a9eb3304a0eecbe6905f071c5" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.561066 4909 scope.go:117] "RemoveContainer" containerID="64fd166960b86103c5f24488aab8af56cafe6527f188efe85add4a0ba1fe7ca4" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.561577 4909 generic.go:334] "Generic (PLEG): container finished" podID="757566f7-a07b-4623-8668-b39f715ea7a9" containerID="d5508762a172daa14c2a1be67c8940b6b6feabb0f8f42fc0e2d7c2458e2f048e" exitCode=1 Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.561630 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" event={"ID":"757566f7-a07b-4623-8668-b39f715ea7a9","Type":"ContainerDied","Data":"d5508762a172daa14c2a1be67c8940b6b6feabb0f8f42fc0e2d7c2458e2f048e"} Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.562138 4909 scope.go:117] "RemoveContainer" containerID="d5508762a172daa14c2a1be67c8940b6b6feabb0f8f42fc0e2d7c2458e2f048e" Nov 26 07:24:52 crc kubenswrapper[4909]: E1126 07:24:52.562376 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-7d6f5d799-4gr4q_openstack-operators(757566f7-a07b-4623-8668-b39f715ea7a9)\"" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" podUID="757566f7-a07b-4623-8668-b39f715ea7a9" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.568579 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-bz9j9" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.601927 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.601967 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.602494 4909 scope.go:117] "RemoveContainer" containerID="46550aa01be1e94397316e0aa3bdac273e255140e5a727a3cc26a4e6b6c20b30" Nov 26 07:24:52 crc kubenswrapper[4909]: E1126 07:24:52.602730 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-6b56b8849f-fd6dq_openstack-operators(0f99fe6f-9209-4c74-9bcb-619212d7812e)\"" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" podUID="0f99fe6f-9209-4c74-9bcb-619212d7812e" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.606708 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.607216 4909 scope.go:117] "RemoveContainer" containerID="0ebf42b05e16b6ffbd9580b90f66273bb86f08a71e9e3cec7d1356bf922df906" Nov 26 07:24:52 crc kubenswrapper[4909]: E1126 07:24:52.607472 4909 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-58dcdd989d-ctkx2_metallb-system(8ace07e4-e65b-451c-8623-f71b4f7d4f14)\"" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" podUID="8ace07e4-e65b-451c-8623-f71b4f7d4f14" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.617199 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.617910 4909 scope.go:117] "RemoveContainer" containerID="56ebc0c15c90d64e49ad7319712fb589c87dfa4655684e9102faa885a9c20e2c" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.620260 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.633229 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.634143 4909 scope.go:117] "RemoveContainer" containerID="00244fbe768c0bbb30adff3ba5e8722a86cc361cdf04193c7bb27635317b5c78" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.634920 4909 scope.go:117] "RemoveContainer" containerID="ab73a6dfe2de6e6ed948050bd837e445392bd72d77314d1a71b0f211e84241d8" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.667310 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.667366 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.668006 4909 scope.go:117] "RemoveContainer" containerID="f6138a1f55b88f5c1790a98c203b524cb85b7e08912bbb77286fea76f6732691" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.704390 4909 scope.go:117] "RemoveContainer" containerID="1e80c5f888565e4393bdf4da78236f3fb1563e2468647edf16843f0be24c0ddb" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.745333 4909 scope.go:117] "RemoveContainer" containerID="3c96b7a91f73c9cb8cf8092250fe91c63dbd4554991f79f360da82fd79f86748" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.777788 4909 scope.go:117] "RemoveContainer" containerID="2dc9c82f9ef3462d3cd37273e34fc4bab4fd6602fe6b6d12fb45224716256fec" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.809487 4909 scope.go:117] "RemoveContainer" containerID="cb8f7b33bfb61a20cf5ee2459c159a657244e4f720651db26ecd7953a49c48ef" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.823287 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 26 07:24:52 crc kubenswrapper[4909]: I1126 07:24:52.947789 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.064004 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 26 07:24:53 crc 
kubenswrapper[4909]: I1126 07:24:53.420382 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.421246 4909 scope.go:117] "RemoveContainer" containerID="ee8d1f2d4ed9b13983d75a1ba0dfde64ce89cdcc842927228a623068442fde74" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.441120 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.579086 4909 generic.go:334] "Generic (PLEG): container finished" podID="af4a09dd-04e0-465d-a817-bacf1a52babe" containerID="e0f30d1d8fef4008a430567ff4a441b4a2e3652ff27aa09b475429f8800b45b8" exitCode=1 Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.579183 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" event={"ID":"af4a09dd-04e0-465d-a817-bacf1a52babe","Type":"ContainerDied","Data":"e0f30d1d8fef4008a430567ff4a441b4a2e3652ff27aa09b475429f8800b45b8"} Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.579309 4909 scope.go:117] "RemoveContainer" containerID="00244fbe768c0bbb30adff3ba5e8722a86cc361cdf04193c7bb27635317b5c78" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.580007 4909 scope.go:117] "RemoveContainer" containerID="e0f30d1d8fef4008a430567ff4a441b4a2e3652ff27aa09b475429f8800b45b8" Nov 26 07:24:53 crc kubenswrapper[4909]: E1126 07:24:53.580274 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=neutron-operator-controller-manager-6b6c55ffd5-dhp84_openstack-operators(af4a09dd-04e0-465d-a817-bacf1a52babe)\"" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" podUID="af4a09dd-04e0-465d-a817-bacf1a52babe" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.593052 4909 generic.go:334] "Generic (PLEG): container finished" podID="4a162aeb-8377-45aa-bd44-6b8aed2f93fb" containerID="0e4013bbe9b8aa0947023e081fbbe1578c436e9f5db81e969a3225e26b661f65" exitCode=1 Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.593191 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" event={"ID":"4a162aeb-8377-45aa-bd44-6b8aed2f93fb","Type":"ContainerDied","Data":"0e4013bbe9b8aa0947023e081fbbe1578c436e9f5db81e969a3225e26b661f65"} Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.594064 4909 scope.go:117] "RemoveContainer" containerID="0e4013bbe9b8aa0947023e081fbbe1578c436e9f5db81e969a3225e26b661f65" Nov 26 07:24:53 crc kubenswrapper[4909]: E1126 07:24:53.594345 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79d658b66d-swdlm_openstack-operators(4a162aeb-8377-45aa-bd44-6b8aed2f93fb)\"" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" podUID="4a162aeb-8377-45aa-bd44-6b8aed2f93fb" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.599673 4909 generic.go:334] "Generic (PLEG): container finished" podID="cad0b373-54da-4331-aa01-27d08edaa1ef" containerID="e17611e4c9ec244f287c6a95c9e820878c31866e9f0c38c8318657b24acdcb04" exitCode=1 Nov 26 07:24:53 
crc kubenswrapper[4909]: I1126 07:24:53.599722 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" event={"ID":"cad0b373-54da-4331-aa01-27d08edaa1ef","Type":"ContainerDied","Data":"e17611e4c9ec244f287c6a95c9e820878c31866e9f0c38c8318657b24acdcb04"} Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.600097 4909 scope.go:117] "RemoveContainer" containerID="e17611e4c9ec244f287c6a95c9e820878c31866e9f0c38c8318657b24acdcb04" Nov 26 07:24:53 crc kubenswrapper[4909]: E1126 07:24:53.600270 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-5b67cfc8fb-xcrzl_openstack-operators(cad0b373-54da-4331-aa01-27d08edaa1ef)\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" podUID="cad0b373-54da-4331-aa01-27d08edaa1ef" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.602207 4909 generic.go:334] "Generic (PLEG): container finished" podID="61289245-0b12-4689-8a98-2b24544cacf8" containerID="a1983bef2f38ce59a80c4bcdd0714d1d41c2900d221988b5119c50489ed1764b" exitCode=1 Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.602411 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" event={"ID":"61289245-0b12-4689-8a98-2b24544cacf8","Type":"ContainerDied","Data":"a1983bef2f38ce59a80c4bcdd0714d1d41c2900d221988b5119c50489ed1764b"} Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.603009 4909 scope.go:117] "RemoveContainer" containerID="a1983bef2f38ce59a80c4bcdd0714d1d41c2900d221988b5119c50489ed1764b" Nov 26 07:24:53 crc kubenswrapper[4909]: E1126 07:24:53.603438 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-7979c68bc7-c696l_openstack-operators(61289245-0b12-4689-8a98-2b24544cacf8)\"" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" podUID="61289245-0b12-4689-8a98-2b24544cacf8" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.608641 4909 generic.go:334] "Generic (PLEG): container finished" podID="ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4" containerID="ce691a3becefea2cf38b093c0a276ed1823b23629d4122bdadfd2187d3de27c7" exitCode=1 Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.608938 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" event={"ID":"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4","Type":"ContainerDied","Data":"ce691a3becefea2cf38b093c0a276ed1823b23629d4122bdadfd2187d3de27c7"} Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.609946 4909 scope.go:117] "RemoveContainer" containerID="ce691a3becefea2cf38b093c0a276ed1823b23629d4122bdadfd2187d3de27c7" Nov 26 07:24:53 crc kubenswrapper[4909]: E1126 07:24:53.610654 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-577c5f6d94-d44wm_openstack-operators(ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4)\"" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" podUID="ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4" Nov 26 07:24:53 crc kubenswrapper[4909]: 
I1126 07:24:53.610672 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.612094 4909 generic.go:334] "Generic (PLEG): container finished" podID="8c9c6404-9f47-434c-ac1b-d08cd48d5156" containerID="76644da2f47976cd1865f1ca5ce32f23ff809ead5064f24be26581303cd8afb2" exitCode=1 Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.612145 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" event={"ID":"8c9c6404-9f47-434c-ac1b-d08cd48d5156","Type":"ContainerDied","Data":"76644da2f47976cd1865f1ca5ce32f23ff809ead5064f24be26581303cd8afb2"} Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.612403 4909 scope.go:117] "RemoveContainer" containerID="76644da2f47976cd1865f1ca5ce32f23ff809ead5064f24be26581303cd8afb2" Nov 26 07:24:53 crc kubenswrapper[4909]: E1126 07:24:53.612569 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-54485f899-8486p_openstack-operators(8c9c6404-9f47-434c-ac1b-d08cd48d5156)\"" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" podUID="8c9c6404-9f47-434c-ac1b-d08cd48d5156" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.620177 4909 generic.go:334] "Generic (PLEG): container finished" podID="10e6987e-11d4-4c64-bc26-bb45590f3fff" containerID="46e15c3fe6ed6745838bcc08d682925eadc726b5b9839dc5831b9f99f4b3ad07" exitCode=1 Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.620281 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" event={"ID":"10e6987e-11d4-4c64-bc26-bb45590f3fff","Type":"ContainerDied","Data":"46e15c3fe6ed6745838bcc08d682925eadc726b5b9839dc5831b9f99f4b3ad07"} Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.620766 4909 scope.go:117] "RemoveContainer" containerID="46e15c3fe6ed6745838bcc08d682925eadc726b5b9839dc5831b9f99f4b3ad07" Nov 26 07:24:53 crc kubenswrapper[4909]: E1126 07:24:53.620948 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-867d87977b-5t9sx_openstack-operators(10e6987e-11d4-4c64-bc26-bb45590f3fff)\"" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" podUID="10e6987e-11d4-4c64-bc26-bb45590f3fff" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.626326 4909 scope.go:117] "RemoveContainer" containerID="bf72c72329779296739bb26f0ab0ed629d06e2935b1e7a1e2543ad964fd70068" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.628242 4909 generic.go:334] "Generic (PLEG): container finished" podID="f8afd5eb-02e8-4a94-be0d-19a709270945" containerID="c04a70165eea314d855ce7944286929e6ca19899172a5cc0fd9d590f6d8caa2d" exitCode=1 Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.628435 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" event={"ID":"f8afd5eb-02e8-4a94-be0d-19a709270945","Type":"ContainerDied","Data":"c04a70165eea314d855ce7944286929e6ca19899172a5cc0fd9d590f6d8caa2d"} Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.629076 4909 scope.go:117] 
"RemoveContainer" containerID="c04a70165eea314d855ce7944286929e6ca19899172a5cc0fd9d590f6d8caa2d" Nov 26 07:24:53 crc kubenswrapper[4909]: E1126 07:24:53.629349 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-58487d9bf4-7rjcs_openstack-operators(f8afd5eb-02e8-4a94-be0d-19a709270945)\"" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" podUID="f8afd5eb-02e8-4a94-be0d-19a709270945" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.636965 4909 generic.go:334] "Generic (PLEG): container finished" podID="9f41a032-71ff-4608-aa2c-b16469fe55a0" containerID="7d10158945536353703d45d536ec03e2c0cbc9628fcbb5f2b98f7a6280016517" exitCode=1 Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.637043 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" event={"ID":"9f41a032-71ff-4608-aa2c-b16469fe55a0","Type":"ContainerDied","Data":"7d10158945536353703d45d536ec03e2c0cbc9628fcbb5f2b98f7a6280016517"} Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.637525 4909 scope.go:117] "RemoveContainer" containerID="7d10158945536353703d45d536ec03e2c0cbc9628fcbb5f2b98f7a6280016517" Nov 26 07:24:53 crc kubenswrapper[4909]: E1126 07:24:53.637846 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-646fd589f9-phclr_openstack-operators(9f41a032-71ff-4608-aa2c-b16469fe55a0)\"" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" podUID="9f41a032-71ff-4608-aa2c-b16469fe55a0" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.639131 4909 generic.go:334] "Generic (PLEG): container finished" podID="cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef" containerID="f36803da5f2e1818c917c7c8fa6632bbf89db6edd34fff7b9980971cc8077692" exitCode=1 Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.639158 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" event={"ID":"cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef","Type":"ContainerDied","Data":"f36803da5f2e1818c917c7c8fa6632bbf89db6edd34fff7b9980971cc8077692"} Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.639863 4909 scope.go:117] "RemoveContainer" containerID="f36803da5f2e1818c917c7c8fa6632bbf89db6edd34fff7b9980971cc8077692" Nov 26 07:24:53 crc kubenswrapper[4909]: E1126 07:24:53.640130 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-64d7c556cd-872rr_openstack-operators(cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef)\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" podUID="cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.686130 4909 scope.go:117] "RemoveContainer" containerID="c1e064cfab488367ef03507abc5fcd093286ba98197b94e60836635ebd37ab9d" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.701806 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.727706 4909 
scope.go:117] "RemoveContainer" containerID="0549ff9b555e8e81a4f23f1dcf94be262695608d9edb908bbc598a4181f19204" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.750676 4909 scope.go:117] "RemoveContainer" containerID="f6138a1f55b88f5c1790a98c203b524cb85b7e08912bbb77286fea76f6732691" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.770241 4909 scope.go:117] "RemoveContainer" containerID="bd3ffcc10e90834af6436a49dd06e7bafbec5df26fe4e03b67a0337fab8666fc" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.809558 4909 scope.go:117] "RemoveContainer" containerID="c1ad00ba3622d62a15b07e1eb86c9b1a64cf5bbb4c09a875dff3539d82441f89" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.835735 4909 scope.go:117] "RemoveContainer" containerID="ec934b246a7b768f7d74a7fdef300a7c00eedb3a9eb3304a0eecbe6905f071c5" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.866256 4909 scope.go:117] "RemoveContainer" containerID="f1af1c8b05f92459818d7d97a278aa200bcf8271932c65434af2871a74d105de" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.878535 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.896375 4909 scope.go:117] "RemoveContainer" containerID="56ebc0c15c90d64e49ad7319712fb589c87dfa4655684e9102faa885a9c20e2c" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.950057 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.951731 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-kdp8v" Nov 26 07:24:53 crc kubenswrapper[4909]: I1126 07:24:53.966410 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.188900 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.389324 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.484366 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.498726 4909 scope.go:117] "RemoveContainer" containerID="407342b53828599091eee6f806cc95bfae1eb8bedb6d5b23c4b75475d569cbf7" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.524381 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.552085 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.561558 4909 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.567704 4909 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tdd8v" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.580866 4909 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.592974 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.612643 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.655586 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.661328 4909 generic.go:334] "Generic (PLEG): container finished" podID="fea4eb2c-ad33-4504-a4e4-8c82875b2d0c" containerID="0a3e236b32aacbd98907cad84f2cacdbf1a91b60bdcf515043a1f44171537a50" exitCode=1 Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.661406 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" event={"ID":"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c","Type":"ContainerDied","Data":"0a3e236b32aacbd98907cad84f2cacdbf1a91b60bdcf515043a1f44171537a50"} Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.661443 4909 scope.go:117] "RemoveContainer" containerID="ee8d1f2d4ed9b13983d75a1ba0dfde64ce89cdcc842927228a623068442fde74" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.662520 4909 scope.go:117] "RemoveContainer" containerID="0a3e236b32aacbd98907cad84f2cacdbf1a91b60bdcf515043a1f44171537a50" Nov 26 07:24:54 crc kubenswrapper[4909]: E1126 07:24:54.663413 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-68c78b6ff8-dmnlq_openstack-operators(fea4eb2c-ad33-4504-a4e4-8c82875b2d0c)\"" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" podUID="fea4eb2c-ad33-4504-a4e4-8c82875b2d0c" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.676925 4909 scope.go:117] "RemoveContainer" containerID="7d10158945536353703d45d536ec03e2c0cbc9628fcbb5f2b98f7a6280016517" Nov 26 07:24:54 crc kubenswrapper[4909]: E1126 07:24:54.677766 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-646fd589f9-phclr_openstack-operators(9f41a032-71ff-4608-aa2c-b16469fe55a0)\"" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" podUID="9f41a032-71ff-4608-aa2c-b16469fe55a0" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.686552 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.777118 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.807280 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.832257 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 
07:24:54.893160 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 26 07:24:54 crc kubenswrapper[4909]: I1126 07:24:54.973983 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.029659 4909 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.041460 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.043578 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.043690 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.044353 4909 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.044408 4909 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36f0eedf-d76a-4104-920a-3b2e4c4fb25b" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.051958 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.071209 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.071178133 podStartE2EDuration="14.071178133s" podCreationTimestamp="2025-11-26 07:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 07:24:55.06412854 +0000 UTC m=+1467.210339706" watchObservedRunningTime="2025-11-26 07:24:55.071178133 +0000 UTC m=+1467.217389299" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.167996 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.257395 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.301036 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.334276 4909 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.346990 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.365338 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.410476 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.424766 4909 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.514901 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.527560 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.528774 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.556105 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.655459 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wqp4b" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.699442 4909 generic.go:334] "Generic (PLEG): container finished" podID="20a1b8f0-7e93-4d4a-b527-7470d128a2bc" containerID="3cc28ce8d4ecbffd6fb7b4c55259b548eda11ec12aa2bb20cffa00029060bfcf" exitCode=1 Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.699559 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" event={"ID":"20a1b8f0-7e93-4d4a-b527-7470d128a2bc","Type":"ContainerDied","Data":"3cc28ce8d4ecbffd6fb7b4c55259b548eda11ec12aa2bb20cffa00029060bfcf"} Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.699747 4909 scope.go:117] "RemoveContainer" containerID="407342b53828599091eee6f806cc95bfae1eb8bedb6d5b23c4b75475d569cbf7" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.700554 4909 scope.go:117] "RemoveContainer" containerID="3cc28ce8d4ecbffd6fb7b4c55259b548eda11ec12aa2bb20cffa00029060bfcf" Nov 26 07:24:55 crc kubenswrapper[4909]: E1126 07:24:55.700907 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-5f97d8c699-w69tb_openstack-operators(20a1b8f0-7e93-4d4a-b527-7470d128a2bc)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" podUID="20a1b8f0-7e93-4d4a-b527-7470d128a2bc" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.701540 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.719205 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.807139 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.808409 4909 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.886010 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.886094 4909 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.892361 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 26 07:24:55 crc kubenswrapper[4909]: I1126 07:24:55.940115 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.017350 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.051264 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.086090 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.195811 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.215195 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.218415 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.236928 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.265259 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.337617 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-trskf" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.374347 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.376543 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.411377 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.419214 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.428853 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.550038 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.592284 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.643431 4909 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.660895 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-n49jc" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.762058 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.774378 4909 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2m54m" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.801187 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.803907 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7kf4x" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.803985 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.821720 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.863869 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tnxdb" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.955468 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-p47qh" Nov 26 07:24:56 crc kubenswrapper[4909]: I1126 07:24:56.994144 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.023994 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.122678 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.147427 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.206920 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.244512 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.342448 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-tpq8q" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.348259 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.351821 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 
07:24:57.438209 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.441947 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.500535 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.555004 4909 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2kv7z" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.565919 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.575905 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.591015 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.623271 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.630153 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.685618 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.706424 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-l2fc5" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.730118 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.834393 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.927181 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mbnf5" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.949381 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.964133 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 26 07:24:57 crc kubenswrapper[4909]: I1126 07:24:57.982184 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.033031 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.080893 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.113946 4909 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-fs9fb" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.162073 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.182070 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.276512 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6c945fd485-mgkgv" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.281362 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.356923 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.419153 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.451575 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.453880 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.485899 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xr9z7" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.522654 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.539120 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.580570 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.641557 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.659156 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.748325 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.766791 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.791171 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.827787 4909 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.827812 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.858849 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 26 07:24:58 crc kubenswrapper[4909]: I1126 07:24:58.865772 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.016247 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.021073 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-zxnmq" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.084060 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.099650 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dj7bc" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.103201 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.123528 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.124482 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.130333 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-k6xs8" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.148242 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.265864 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.283481 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.329284 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.329787 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.372405 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9q9lk" Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.391642 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-txg7z" 
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.441720 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-v7qjq"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.450090 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.485561 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.542487 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.567292 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.577962 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.595338 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.666086 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.673945 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.708930 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.708943 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.749339 4909 generic.go:334] "Generic (PLEG): container finished" podID="ff1a0925-55ac-478f-a400-44391e090a1d" containerID="c013e6d5ee288db69f4caae45c7cbb840813134bdaa0d5fd1f785824de8e05b1" exitCode=1
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.749379 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" event={"ID":"ff1a0925-55ac-478f-a400-44391e090a1d","Type":"ContainerDied","Data":"c013e6d5ee288db69f4caae45c7cbb840813134bdaa0d5fd1f785824de8e05b1"}
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.749717 4909 scope.go:117] "RemoveContainer" containerID="c013e6d5ee288db69f4caae45c7cbb840813134bdaa0d5fd1f785824de8e05b1"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.755210 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.778236 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.831703 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.837162 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.855399 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-sbqbv"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.862204 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.864940 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.871848 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 26 07:24:59 crc kubenswrapper[4909]: I1126 07:24:59.895664 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.064044 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.065110 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.073443 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.087130 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.097506 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.132481 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.187540 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.229962 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.233102 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.271085 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.306256 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.367906 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.371783 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.480125 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.482016 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.483225 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xs9zf"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.531579 4909 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.531732 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.531805 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.532729 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"5e6c9ad8f6da8cf9699e7224aa5b0a87401d251855213141056855b774be1077"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.532973 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://5e6c9ad8f6da8cf9699e7224aa5b0a87401d251855213141056855b774be1077" gracePeriod=30
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.534254 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fb5c2"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.549945 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.670169 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.756949 4909 generic.go:334] "Generic (PLEG): container finished" podID="ce540878-55f9-495e-8cc1-30402bb55d9f" containerID="8bc6453c4d18ccd3bfefbe19b0c7e26b6c5d86f34b772663446d2795f0cf076f" exitCode=1
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.757016 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4p4p2" event={"ID":"ce540878-55f9-495e-8cc1-30402bb55d9f","Type":"ContainerDied","Data":"8bc6453c4d18ccd3bfefbe19b0c7e26b6c5d86f34b772663446d2795f0cf076f"}
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.758614 4909 generic.go:334] "Generic (PLEG): container finished" podID="ff1a0925-55ac-478f-a400-44391e090a1d" containerID="88646fe4a9094d55d038d80f4c91571a868df076e4f2a2b1b705b31b610fbed3" exitCode=1
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.758646 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" event={"ID":"ff1a0925-55ac-478f-a400-44391e090a1d","Type":"ContainerDied","Data":"88646fe4a9094d55d038d80f4c91571a868df076e4f2a2b1b705b31b610fbed3"}
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.758673 4909 scope.go:117] "RemoveContainer" containerID="c013e6d5ee288db69f4caae45c7cbb840813134bdaa0d5fd1f785824de8e05b1"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.759057 4909 scope.go:117] "RemoveContainer" containerID="88646fe4a9094d55d038d80f4c91571a868df076e4f2a2b1b705b31b610fbed3"
Nov 26 07:25:00 crc kubenswrapper[4909]: E1126 07:25:00.759321 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-855d9ccff4-86dsq_cert-manager(ff1a0925-55ac-478f-a400-44391e090a1d)\"" pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" podUID="ff1a0925-55ac-478f-a400-44391e090a1d"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.759657 4909 scope.go:117] "RemoveContainer" containerID="8bc6453c4d18ccd3bfefbe19b0c7e26b6c5d86f34b772663446d2795f0cf076f"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.768223 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tnrk4"
Nov 26 07:25:00 crc kubenswrapper[4909]: I1126 07:25:00.768481 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.054267 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.127721 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.227672 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.277857 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.279314 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.340180 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.424919 4909 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.428482 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.437146 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.486631 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.491191 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.537234 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.584336 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.584935 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.589358 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.628032 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.650905 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.741652 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.745040 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.767666 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.770089 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4p4p2" event={"ID":"ce540878-55f9-495e-8cc1-30402bb55d9f","Type":"ContainerStarted","Data":"c75cafd8b0409aec05c6dafd9767ddd89c8837183211497fd6ba6faea5ed6b51"}
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.803759 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.815130 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.932780 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.933548 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.934102 4909 scope.go:117] "RemoveContainer" containerID="f922ef067b47cec85c1b029f70245aa42ddba85b2d40d007f291d582e521243d"
Nov 26 07:25:01 crc kubenswrapper[4909]: E1126 07:25:01.934302 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-5bfbbb859d-2cwgh_openstack-operators(f7f77917-da54-4e82-a356-80000a53395a)\"" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" podUID="f7f77917-da54-4e82-a356-80000a53395a"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.936384 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-c7gpv"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.940159 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.940654 4909 scope.go:117] "RemoveContainer" containerID="054e7f83fff6d0f887b532bf343d151476b11eba99e8ccc0d467e9518f561cea"
Nov 26 07:25:01 crc kubenswrapper[4909]: E1126 07:25:01.940839 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-748967c98-2x9sp_openstack-operators(138eaa02-be79-4e16-8627-cc582d5b6770)\"" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" podUID="138eaa02-be79-4e16-8627-cc582d5b6770"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.975457 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.976210 4909 scope.go:117] "RemoveContainer" containerID="33b2601a1d9c142e8adf599e2031bb8481164a7412550db0ae1a3bc918ea7365"
Nov 26 07:25:01 crc kubenswrapper[4909]: E1126 07:25:01.976441 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=designate-operator-controller-manager-6788cc6d75-scqbd_openstack-operators(b3ca7f6d-4dba-4e22-ae42-f4184932fba2)\"" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" podUID="b3ca7f6d-4dba-4e22-ae42-f4184932fba2"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.989550 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw"
Nov 26 07:25:01 crc kubenswrapper[4909]: I1126 07:25:01.990199 4909 scope.go:117] "RemoveContainer" containerID="962e1953cb5b37242bd0b7d60d3076702a548e009da0760b1f505453497e4d0f"
Nov 26 07:25:01 crc kubenswrapper[4909]: E1126 07:25:01.990415 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-6bd966bbd4-6j4kw_openstack-operators(cd83d237-7922-4458-9fce-8c296d0ccc0f)\"" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" podUID="cd83d237-7922-4458-9fce-8c296d0ccc0f"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.005251 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.005904 4909 scope.go:117] "RemoveContainer" containerID="bdb7bdfdf8453c4604d534f78f14057e315e0f171be247f50aea612c86a1bace"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.006146 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=heat-operator-controller-manager-698d6fd7d6-692sc_openstack-operators(f4c87de0-5b1c-44f8-a2fb-1949a3f4af03)\"" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" podUID="f4c87de0-5b1c-44f8-a2fb-1949a3f4af03"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.007822 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.044340 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.054148 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.054463 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.054667 4909 scope.go:117] "RemoveContainer" containerID="ee38ad77f2fd0b5de89773335f2948ab1663d545529246f231c7f2b572616ad6"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.054919 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-7d5d9fd47f-sphql_openstack-operators(0ebad6d0-e522-4012-869e-903c89bd1703)\"" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" podUID="0ebad6d0-e522-4012-869e-903c89bd1703"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.137221 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.151566 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.152491 4909 scope.go:117] "RemoveContainer" containerID="d5508762a172daa14c2a1be67c8940b6b6feabb0f8f42fc0e2d7c2458e2f048e"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.152830 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-7d6f5d799-4gr4q_openstack-operators(757566f7-a07b-4623-8668-b39f715ea7a9)\"" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" podUID="757566f7-a07b-4623-8668-b39f715ea7a9"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.247066 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.289781 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.303573 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zhvxq"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.360054 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.368542 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.389933 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.393338 4909 scope.go:117] "RemoveContainer" containerID="0e4013bbe9b8aa0947023e081fbbe1578c436e9f5db81e969a3225e26b661f65"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.393767 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79d658b66d-swdlm_openstack-operators(4a162aeb-8377-45aa-bd44-6b8aed2f93fb)\"" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" podUID="4a162aeb-8377-45aa-bd44-6b8aed2f93fb"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.395283 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.399561 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.401549 4909 scope.go:117] "RemoveContainer" containerID="76644da2f47976cd1865f1ca5ce32f23ff809ead5064f24be26581303cd8afb2"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.402244 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-54485f899-8486p_openstack-operators(8c9c6404-9f47-434c-ac1b-d08cd48d5156)\"" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" podUID="8c9c6404-9f47-434c-ac1b-d08cd48d5156"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.413780 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.414870 4909 scope.go:117] "RemoveContainer" containerID="a1983bef2f38ce59a80c4bcdd0714d1d41c2900d221988b5119c50489ed1764b"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.415129 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-7979c68bc7-c696l_openstack-operators(61289245-0b12-4689-8a98-2b24544cacf8)\"" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" podUID="61289245-0b12-4689-8a98-2b24544cacf8"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.465510 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.465939 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.466126 4909 scope.go:117] "RemoveContainer" containerID="e17611e4c9ec244f287c6a95c9e820878c31866e9f0c38c8318657b24acdcb04"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.466359 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-5b67cfc8fb-xcrzl_openstack-operators(cad0b373-54da-4331-aa01-27d08edaa1ef)\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" podUID="cad0b373-54da-4331-aa01-27d08edaa1ef"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.483841 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.484906 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.486769 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.487447 4909 scope.go:117] "RemoveContainer" containerID="46e15c3fe6ed6745838bcc08d682925eadc726b5b9839dc5831b9f99f4b3ad07"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.487808 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-867d87977b-5t9sx_openstack-operators(10e6987e-11d4-4c64-bc26-bb45590f3fff)\"" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" podUID="10e6987e-11d4-4c64-bc26-bb45590f3fff"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.496102 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.509051 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.556891 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.558176 4909 scope.go:117] "RemoveContainer" containerID="c04a70165eea314d855ce7944286929e6ca19899172a5cc0fd9d590f6d8caa2d"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.558676 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-58487d9bf4-7rjcs_openstack-operators(f8afd5eb-02e8-4a94-be0d-19a709270945)\"" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" podUID="f8afd5eb-02e8-4a94-be0d-19a709270945"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.616812 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.617649 4909 scope.go:117] "RemoveContainer" containerID="f36803da5f2e1818c917c7c8fa6632bbf89db6edd34fff7b9980971cc8077692"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.617884 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-64d7c556cd-872rr_openstack-operators(cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef)\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" podUID="cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.618634 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.633665 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.634308 4909 scope.go:117] "RemoveContainer" containerID="e0f30d1d8fef4008a430567ff4a441b4a2e3652ff27aa09b475429f8800b45b8"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.634543 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=neutron-operator-controller-manager-6b6c55ffd5-dhp84_openstack-operators(af4a09dd-04e0-465d-a817-bacf1a52babe)\"" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" podUID="af4a09dd-04e0-465d-a817-bacf1a52babe"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.667720 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.668419 4909 scope.go:117] "RemoveContainer" containerID="ce691a3becefea2cf38b093c0a276ed1823b23629d4122bdadfd2187d3de27c7"
Nov 26 07:25:02 crc kubenswrapper[4909]: E1126 07:25:02.668762 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-577c5f6d94-d44wm_openstack-operators(ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4)\"" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" podUID="ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.691640 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.692794 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.710018 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.720922 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.761582 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.791510 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.858338 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.863379 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.899862 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.934985 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.954555 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8h9nj"
Nov 26 07:25:02 crc kubenswrapper[4909]: I1126 07:25:02.966559 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.070214 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.141915 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.182422 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.184453 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.206247 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.225951 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.353148 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.375189 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.395815 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.406112 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.420434 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.421636 4909 scope.go:117] "RemoveContainer" containerID="0a3e236b32aacbd98907cad84f2cacdbf1a91b60bdcf515043a1f44171537a50"
Nov 26 07:25:03 crc kubenswrapper[4909]: E1126 07:25:03.422053 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-68c78b6ff8-dmnlq_openstack-operators(fea4eb2c-ad33-4504-a4e4-8c82875b2d0c)\"" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" podUID="fea4eb2c-ad33-4504-a4e4-8c82875b2d0c"
Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.482130 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zm4pk" Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.498913 4909 scope.go:117] "RemoveContainer" containerID="04974c57da082b27c1ecc3e323138bd5ea8565218f48c5fb3988d32bad90d303" Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.594951 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.634429 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-xs58n" Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.700908 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.718810 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.792181 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" event={"ID":"5b985112-f6b3-4879-b02e-8ac0e510730b","Type":"ContainerStarted","Data":"8a8fc153dd7222e044c3a8ec2795b7ade935267d18b141e88b2ef3d264f403db"} Nov 26 07:25:03 crc kubenswrapper[4909]: I1126 07:25:03.792880 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.125239 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.137400 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.181151 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.212210 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.232241 4909 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.232489 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b" gracePeriod=5 Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.232722 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.236000 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.241579 4909 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.302998 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.350137 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.408137 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-dl4bs" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.499693 4909 scope.go:117] "RemoveContainer" containerID="46550aa01be1e94397316e0aa3bdac273e255140e5a727a3cc26a4e6b6c20b30" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.520794 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nhlth" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.547891 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.620372 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.632293 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.669951 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.686422 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f47fc" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.805405 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" event={"ID":"0f99fe6f-9209-4c74-9bcb-619212d7812e","Type":"ContainerStarted","Data":"e018e52a39f18b07753ddb06db78f5f9a15a772aa7d7057e9a96c27f263f9104"} Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.805739 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.878282 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.914440 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.945382 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.947867 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 26 07:25:04 crc kubenswrapper[4909]: I1126 07:25:04.988891 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.011685 4909 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.013252 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.246145 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.252512 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-hrbnn" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.346185 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.432528 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.499036 4909 scope.go:117] "RemoveContainer" containerID="7d10158945536353703d45d536ec03e2c0cbc9628fcbb5f2b98f7a6280016517" Nov 26 07:25:05 crc kubenswrapper[4909]: E1126 07:25:05.499444 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-646fd589f9-phclr_openstack-operators(9f41a032-71ff-4608-aa2c-b16469fe55a0)\"" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" podUID="9f41a032-71ff-4608-aa2c-b16469fe55a0" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.512152 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.530449 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.593301 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.616966 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.640333 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.767184 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.859374 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 26 07:25:05 crc kubenswrapper[4909]: I1126 07:25:05.884221 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.116223 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.129634 4909 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.210409 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.260259 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.382415 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.426383 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.499417 4909 scope.go:117] "RemoveContainer" containerID="3cc28ce8d4ecbffd6fb7b4c55259b548eda11ec12aa2bb20cffa00029060bfcf" Nov 26 07:25:06 crc kubenswrapper[4909]: E1126 07:25:06.499983 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-5f97d8c699-w69tb_openstack-operators(20a1b8f0-7e93-4d4a-b527-7470d128a2bc)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" podUID="20a1b8f0-7e93-4d4a-b527-7470d128a2bc" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.512214 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.533899 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.596083 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.636392 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.741701 4909 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-zqpgt" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.791289 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.813279 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 26 07:25:06 crc kubenswrapper[4909]: I1126 07:25:06.880396 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 26 07:25:07 crc kubenswrapper[4909]: I1126 07:25:07.069749 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 26 07:25:07 crc kubenswrapper[4909]: I1126 07:25:07.169518 4909 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ljk2q" Nov 26 07:25:07 crc kubenswrapper[4909]: I1126 07:25:07.301499 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:25:07 crc kubenswrapper[4909]: I1126 07:25:07.301626 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:25:07 crc kubenswrapper[4909]: I1126 07:25:07.404446 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 26 07:25:07 crc kubenswrapper[4909]: I1126 07:25:07.499364 4909 scope.go:117] "RemoveContainer" containerID="0ebf42b05e16b6ffbd9580b90f66273bb86f08a71e9e3cec7d1356bf922df906" Nov 26 07:25:07 crc kubenswrapper[4909]: I1126 07:25:07.629155 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 26 07:25:07 crc kubenswrapper[4909]: I1126 07:25:07.643105 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 26 07:25:07 crc kubenswrapper[4909]: I1126 07:25:07.763451 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 26 07:25:07 crc kubenswrapper[4909]: I1126 07:25:07.838360 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" event={"ID":"8ace07e4-e65b-451c-8623-f71b4f7d4f14","Type":"ContainerStarted","Data":"541c49b84a1eab5b679453c785513750ee644b0f10c0a43db56ec668674639bf"} Nov 26 07:25:07 crc kubenswrapper[4909]: I1126 07:25:07.838903 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" Nov 26 07:25:08 crc kubenswrapper[4909]: I1126 07:25:08.015618 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.817654 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.817901 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.865383 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.865457 4909 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b" exitCode=137 Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.865493 4909 scope.go:117] "RemoveContainer" containerID="2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.865574 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.894328 4909 scope.go:117] "RemoveContainer" containerID="2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b" Nov 26 07:25:09 crc kubenswrapper[4909]: E1126 07:25:09.896725 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b\": container with ID starting with 2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b not found: ID does not exist" containerID="2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.896796 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b"} err="failed to get container status \"2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b\": rpc error: code = NotFound desc = could not find container \"2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b\": container with ID starting with 2d9e1ce0a2f92e421ec2bacb183892b40ad4418ef8dfa8ee774bb9aa0944ea9b not found: ID does not exist" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.912613 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.912665 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.912688 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.912742 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.912727 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.912778 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.912761 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.912888 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.912891 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.913216 4909 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.913244 4909 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.913256 4909 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.913268 4909 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 26 07:25:09 crc kubenswrapper[4909]: I1126 07:25:09.919778 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 07:25:10 crc kubenswrapper[4909]: I1126 07:25:10.015575 4909 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 26 07:25:10 crc kubenswrapper[4909]: I1126 07:25:10.508889 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 26 07:25:11 crc kubenswrapper[4909]: I1126 07:25:11.934147 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:25:11 crc kubenswrapper[4909]: I1126 07:25:11.935376 4909 scope.go:117] "RemoveContainer" containerID="f922ef067b47cec85c1b029f70245aa42ddba85b2d40d007f291d582e521243d" Nov 26 07:25:11 crc kubenswrapper[4909]: E1126 07:25:11.935769 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-5bfbbb859d-2cwgh_openstack-operators(f7f77917-da54-4e82-a356-80000a53395a)\"" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" podUID="f7f77917-da54-4e82-a356-80000a53395a" Nov 26 07:25:11 crc kubenswrapper[4909]: I1126 07:25:11.940459 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:25:11 crc kubenswrapper[4909]: I1126 07:25:11.941328 4909 scope.go:117] "RemoveContainer" containerID="054e7f83fff6d0f887b532bf343d151476b11eba99e8ccc0d467e9518f561cea" Nov 26 07:25:11 crc kubenswrapper[4909]: E1126 07:25:11.941569 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-748967c98-2x9sp_openstack-operators(138eaa02-be79-4e16-8627-cc582d5b6770)\"" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" podUID="138eaa02-be79-4e16-8627-cc582d5b6770" Nov 26 07:25:11 crc kubenswrapper[4909]: I1126 07:25:11.975350 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:25:11 crc kubenswrapper[4909]: I1126 07:25:11.975985 4909 scope.go:117] "RemoveContainer" containerID="33b2601a1d9c142e8adf599e2031bb8481164a7412550db0ae1a3bc918ea7365" Nov 26 07:25:11 crc kubenswrapper[4909]: E1126 07:25:11.976200 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=designate-operator-controller-manager-6788cc6d75-scqbd_openstack-operators(b3ca7f6d-4dba-4e22-ae42-f4184932fba2)\"" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" podUID="b3ca7f6d-4dba-4e22-ae42-f4184932fba2" Nov 26 07:25:11 crc kubenswrapper[4909]: I1126 07:25:11.990033 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:25:11 crc kubenswrapper[4909]: I1126 07:25:11.991237 4909 scope.go:117] "RemoveContainer" 
containerID="962e1953cb5b37242bd0b7d60d3076702a548e009da0760b1f505453497e4d0f" Nov 26 07:25:11 crc kubenswrapper[4909]: E1126 07:25:11.991733 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-6bd966bbd4-6j4kw_openstack-operators(cd83d237-7922-4458-9fce-8c296d0ccc0f)\"" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" podUID="cd83d237-7922-4458-9fce-8c296d0ccc0f" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.004922 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.005474 4909 scope.go:117] "RemoveContainer" containerID="bdb7bdfdf8453c4604d534f78f14057e315e0f171be247f50aea612c86a1bace" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.005679 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=heat-operator-controller-manager-698d6fd7d6-692sc_openstack-operators(f4c87de0-5b1c-44f8-a2fb-1949a3f4af03)\"" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" podUID="f4c87de0-5b1c-44f8-a2fb-1949a3f4af03" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.053875 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.054694 4909 scope.go:117] "RemoveContainer" containerID="ee38ad77f2fd0b5de89773335f2948ab1663d545529246f231c7f2b572616ad6" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.054960 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-7d5d9fd47f-sphql_openstack-operators(0ebad6d0-e522-4012-869e-903c89bd1703)\"" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" podUID="0ebad6d0-e522-4012-869e-903c89bd1703" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.151726 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.152258 4909 scope.go:117] "RemoveContainer" containerID="d5508762a172daa14c2a1be67c8940b6b6feabb0f8f42fc0e2d7c2458e2f048e" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.152507 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-7d6f5d799-4gr4q_openstack-operators(757566f7-a07b-4623-8668-b39f715ea7a9)\"" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" podUID="757566f7-a07b-4623-8668-b39f715ea7a9" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.314423 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.315247 4909 scope.go:117] "RemoveContainer" 
containerID="7d10158945536353703d45d536ec03e2c0cbc9628fcbb5f2b98f7a6280016517" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.315537 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-646fd589f9-phclr_openstack-operators(9f41a032-71ff-4608-aa2c-b16469fe55a0)\"" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" podUID="9f41a032-71ff-4608-aa2c-b16469fe55a0" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.385170 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.385771 4909 scope.go:117] "RemoveContainer" containerID="0e4013bbe9b8aa0947023e081fbbe1578c436e9f5db81e969a3225e26b661f65" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.385968 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79d658b66d-swdlm_openstack-operators(4a162aeb-8377-45aa-bd44-6b8aed2f93fb)\"" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" podUID="4a162aeb-8377-45aa-bd44-6b8aed2f93fb" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.398715 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.399448 4909 scope.go:117] "RemoveContainer" containerID="76644da2f47976cd1865f1ca5ce32f23ff809ead5064f24be26581303cd8afb2" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.399767 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-54485f899-8486p_openstack-operators(8c9c6404-9f47-434c-ac1b-d08cd48d5156)\"" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" podUID="8c9c6404-9f47-434c-ac1b-d08cd48d5156" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.411097 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.412301 4909 scope.go:117] "RemoveContainer" containerID="a1983bef2f38ce59a80c4bcdd0714d1d41c2900d221988b5119c50489ed1764b" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.412645 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-7979c68bc7-c696l_openstack-operators(61289245-0b12-4689-8a98-2b24544cacf8)\"" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" podUID="61289245-0b12-4689-8a98-2b24544cacf8" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.465371 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.466182 4909 scope.go:117] "RemoveContainer" 
containerID="e17611e4c9ec244f287c6a95c9e820878c31866e9f0c38c8318657b24acdcb04" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.466529 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-5b67cfc8fb-xcrzl_openstack-operators(cad0b373-54da-4331-aa01-27d08edaa1ef)\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" podUID="cad0b373-54da-4331-aa01-27d08edaa1ef" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.486733 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.487636 4909 scope.go:117] "RemoveContainer" containerID="46e15c3fe6ed6745838bcc08d682925eadc726b5b9839dc5831b9f99f4b3ad07" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.487980 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-867d87977b-5t9sx_openstack-operators(10e6987e-11d4-4c64-bc26-bb45590f3fff)\"" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" podUID="10e6987e-11d4-4c64-bc26-bb45590f3fff" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.555377 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-kbwpk" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.556043 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.556969 4909 scope.go:117] "RemoveContainer" containerID="c04a70165eea314d855ce7944286929e6ca19899172a5cc0fd9d590f6d8caa2d" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.557703 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-58487d9bf4-7rjcs_openstack-operators(f8afd5eb-02e8-4a94-be0d-19a709270945)\"" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" podUID="f8afd5eb-02e8-4a94-be0d-19a709270945" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.603699 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-fd6dq" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.617524 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.619645 4909 scope.go:117] "RemoveContainer" containerID="f36803da5f2e1818c917c7c8fa6632bbf89db6edd34fff7b9980971cc8077692" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.620025 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-64d7c556cd-872rr_openstack-operators(cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef)\"" 
pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" podUID="cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.634206 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.634819 4909 scope.go:117] "RemoveContainer" containerID="e0f30d1d8fef4008a430567ff4a441b4a2e3652ff27aa09b475429f8800b45b8" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.635044 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=neutron-operator-controller-manager-6b6c55ffd5-dhp84_openstack-operators(af4a09dd-04e0-465d-a817-bacf1a52babe)\"" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" podUID="af4a09dd-04e0-465d-a817-bacf1a52babe" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.668102 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:25:12 crc kubenswrapper[4909]: I1126 07:25:12.668877 4909 scope.go:117] "RemoveContainer" containerID="ce691a3becefea2cf38b093c0a276ed1823b23629d4122bdadfd2187d3de27c7" Nov 26 07:25:12 crc kubenswrapper[4909]: E1126 07:25:12.669135 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-577c5f6d94-d44wm_openstack-operators(ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4)\"" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" podUID="ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4" Nov 26 07:25:13 crc kubenswrapper[4909]: I1126 07:25:13.419813 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:25:13 crc kubenswrapper[4909]: I1126 07:25:13.420261 4909 scope.go:117] "RemoveContainer" containerID="0a3e236b32aacbd98907cad84f2cacdbf1a91b60bdcf515043a1f44171537a50" Nov 26 07:25:13 crc kubenswrapper[4909]: E1126 07:25:13.420448 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-68c78b6ff8-dmnlq_openstack-operators(fea4eb2c-ad33-4504-a4e4-8c82875b2d0c)\"" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" podUID="fea4eb2c-ad33-4504-a4e4-8c82875b2d0c" Nov 26 07:25:14 crc kubenswrapper[4909]: I1126 07:25:14.500070 4909 scope.go:117] "RemoveContainer" containerID="88646fe4a9094d55d038d80f4c91571a868df076e4f2a2b1b705b31b610fbed3" Nov 26 07:25:14 crc kubenswrapper[4909]: I1126 07:25:14.910639 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-86dsq" event={"ID":"ff1a0925-55ac-478f-a400-44391e090a1d","Type":"ContainerStarted","Data":"203a680bbe80900ab5c6ec428180cbf013d9f5fb3dc7cbfd519ca6f1c0bdf477"} Nov 26 07:25:21 crc kubenswrapper[4909]: I1126 07:25:21.499461 4909 scope.go:117] "RemoveContainer" containerID="3cc28ce8d4ecbffd6fb7b4c55259b548eda11ec12aa2bb20cffa00029060bfcf" Nov 26 07:25:21 crc kubenswrapper[4909]: I1126 07:25:21.974138 4909 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-w69tb" event={"ID":"20a1b8f0-7e93-4d4a-b527-7470d128a2bc","Type":"ContainerStarted","Data":"e49f037fb41be173255774d286806fdf042727f870076abf598e51ce51b100e9"} Nov 26 07:25:22 crc kubenswrapper[4909]: I1126 07:25:22.499736 4909 scope.go:117] "RemoveContainer" containerID="bdb7bdfdf8453c4604d534f78f14057e315e0f171be247f50aea612c86a1bace" Nov 26 07:25:22 crc kubenswrapper[4909]: I1126 07:25:22.500001 4909 scope.go:117] "RemoveContainer" containerID="962e1953cb5b37242bd0b7d60d3076702a548e009da0760b1f505453497e4d0f" Nov 26 07:25:22 crc kubenswrapper[4909]: I1126 07:25:22.500245 4909 scope.go:117] "RemoveContainer" containerID="f922ef067b47cec85c1b029f70245aa42ddba85b2d40d007f291d582e521243d" Nov 26 07:25:22 crc kubenswrapper[4909]: I1126 07:25:22.987883 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" event={"ID":"f4c87de0-5b1c-44f8-a2fb-1949a3f4af03","Type":"ContainerStarted","Data":"aee7e375a1cdc302e2ec65c0734bbb33c775395357565decfbffbadcb12c43b8"} Nov 26 07:25:22 crc kubenswrapper[4909]: I1126 07:25:22.990142 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:25:22 crc kubenswrapper[4909]: I1126 07:25:22.995968 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" event={"ID":"cd83d237-7922-4458-9fce-8c296d0ccc0f","Type":"ContainerStarted","Data":"700a915337771d6b739743d1286f196127c2626be826c28ae4e71a3b3d941714"} Nov 26 07:25:22 crc kubenswrapper[4909]: I1126 07:25:22.996566 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:25:22 crc kubenswrapper[4909]: I1126 07:25:22.998506 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" event={"ID":"f7f77917-da54-4e82-a356-80000a53395a","Type":"ContainerStarted","Data":"b8eec5509eaec569235d975d6bdde87c74c7443eb73eecb26b743a2c5dac0fcf"} Nov 26 07:25:22 crc kubenswrapper[4909]: I1126 07:25:22.999025 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:25:23 crc kubenswrapper[4909]: I1126 07:25:23.499322 4909 scope.go:117] "RemoveContainer" containerID="a1983bef2f38ce59a80c4bcdd0714d1d41c2900d221988b5119c50489ed1764b" Nov 26 07:25:24 crc kubenswrapper[4909]: I1126 07:25:24.012866 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" event={"ID":"61289245-0b12-4689-8a98-2b24544cacf8","Type":"ContainerStarted","Data":"f738e4e0d30101adc82b3cfa198e4af773fe532d050123311630bbb4a91bba03"} Nov 26 07:25:24 crc kubenswrapper[4909]: I1126 07:25:24.015334 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" Nov 26 07:25:24 crc kubenswrapper[4909]: I1126 07:25:24.500347 4909 scope.go:117] "RemoveContainer" containerID="c04a70165eea314d855ce7944286929e6ca19899172a5cc0fd9d590f6d8caa2d" Nov 26 07:25:24 crc kubenswrapper[4909]: I1126 07:25:24.500517 4909 scope.go:117] "RemoveContainer" 
containerID="054e7f83fff6d0f887b532bf343d151476b11eba99e8ccc0d467e9518f561cea" Nov 26 07:25:24 crc kubenswrapper[4909]: I1126 07:25:24.500689 4909 scope.go:117] "RemoveContainer" containerID="33b2601a1d9c142e8adf599e2031bb8481164a7412550db0ae1a3bc918ea7365" Nov 26 07:25:24 crc kubenswrapper[4909]: I1126 07:25:24.501118 4909 scope.go:117] "RemoveContainer" containerID="46e15c3fe6ed6745838bcc08d682925eadc726b5b9839dc5831b9f99f4b3ad07" Nov 26 07:25:24 crc kubenswrapper[4909]: I1126 07:25:24.501551 4909 scope.go:117] "RemoveContainer" containerID="0e4013bbe9b8aa0947023e081fbbe1578c436e9f5db81e969a3225e26b661f65" Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.023665 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" event={"ID":"4a162aeb-8377-45aa-bd44-6b8aed2f93fb","Type":"ContainerStarted","Data":"d19e939ffc1dd643f085d0a8c322af3ed3d37ea7b975442d4f20f14ac2198f7f"} Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.024350 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.025827 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" event={"ID":"f8afd5eb-02e8-4a94-be0d-19a709270945","Type":"ContainerStarted","Data":"38f4ac2286348ea7e8d5770ed9301ed6514216ea4a0b711f7cbfb55d8bb227ee"} Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.025999 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.027684 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" event={"ID":"b3ca7f6d-4dba-4e22-ae42-f4184932fba2","Type":"ContainerStarted","Data":"d1a2c5b4d5966cb95accf1fece23583ce7080e1c0f55a200c8f50fd481f8d88f"} Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.027957 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.030999 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" event={"ID":"10e6987e-11d4-4c64-bc26-bb45590f3fff","Type":"ContainerStarted","Data":"87f7157c4d156675d28f3e67443bf466b0a3f1ac4d9b7c410a7cab8b5a7df756"} Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.031347 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.033180 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" event={"ID":"138eaa02-be79-4e16-8627-cc582d5b6770","Type":"ContainerStarted","Data":"46b6e5dc057fd369c19b3a3f8fb2c4ccec03c349c26e05d67f0edc243f264b51"} Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.499262 4909 scope.go:117] "RemoveContainer" containerID="d5508762a172daa14c2a1be67c8940b6b6feabb0f8f42fc0e2d7c2458e2f048e" Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.499292 4909 scope.go:117] "RemoveContainer" 
containerID="f36803da5f2e1818c917c7c8fa6632bbf89db6edd34fff7b9980971cc8077692" Nov 26 07:25:25 crc kubenswrapper[4909]: I1126 07:25:25.499424 4909 scope.go:117] "RemoveContainer" containerID="e17611e4c9ec244f287c6a95c9e820878c31866e9f0c38c8318657b24acdcb04" Nov 26 07:25:26 crc kubenswrapper[4909]: I1126 07:25:26.046789 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" event={"ID":"cad0b373-54da-4331-aa01-27d08edaa1ef","Type":"ContainerStarted","Data":"0d4429024d1c686b6da957f3b64fafb2b0faf5ead8285331fd77be35cd89ad02"} Nov 26 07:25:26 crc kubenswrapper[4909]: I1126 07:25:26.047095 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:25:26 crc kubenswrapper[4909]: I1126 07:25:26.049888 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" event={"ID":"cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef","Type":"ContainerStarted","Data":"7994da3538736a1b1ec9be6b9c1f5ab4a8d93c1785412f079147d490503cf3c4"} Nov 26 07:25:26 crc kubenswrapper[4909]: I1126 07:25:26.050128 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:25:26 crc kubenswrapper[4909]: I1126 07:25:26.052971 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" event={"ID":"757566f7-a07b-4623-8668-b39f715ea7a9","Type":"ContainerStarted","Data":"86ee7940862a61c6b7f8d5b6e4ab7dfac5eef9817e424913aa6efc27dbbfa2f5"} Nov 26 07:25:26 crc kubenswrapper[4909]: I1126 07:25:26.498833 4909 scope.go:117] "RemoveContainer" containerID="ce691a3becefea2cf38b093c0a276ed1823b23629d4122bdadfd2187d3de27c7" Nov 26 07:25:26 crc kubenswrapper[4909]: I1126 07:25:26.499637 4909 scope.go:117] "RemoveContainer" containerID="ee38ad77f2fd0b5de89773335f2948ab1663d545529246f231c7f2b572616ad6" Nov 26 07:25:27 crc kubenswrapper[4909]: I1126 07:25:27.066989 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" event={"ID":"0ebad6d0-e522-4012-869e-903c89bd1703","Type":"ContainerStarted","Data":"ad9ee896a0c6b8446a29ed65b499ba3e728120c77dc01153ca63662f1d918af8"} Nov 26 07:25:27 crc kubenswrapper[4909]: I1126 07:25:27.067981 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 07:25:27 crc kubenswrapper[4909]: I1126 07:25:27.071140 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" event={"ID":"ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4","Type":"ContainerStarted","Data":"dcf14f34ea66c0524a2026993d71113a2bc2bd115f8199c75753fda305a981bb"} Nov 26 07:25:27 crc kubenswrapper[4909]: I1126 07:25:27.071681 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:25:27 crc kubenswrapper[4909]: I1126 07:25:27.499759 4909 scope.go:117] "RemoveContainer" containerID="76644da2f47976cd1865f1ca5ce32f23ff809ead5064f24be26581303cd8afb2" Nov 26 07:25:27 crc kubenswrapper[4909]: I1126 07:25:27.499916 4909 scope.go:117] "RemoveContainer" containerID="7d10158945536353703d45d536ec03e2c0cbc9628fcbb5f2b98f7a6280016517" Nov 
26 07:25:27 crc kubenswrapper[4909]: I1126 07:25:27.501892 4909 scope.go:117] "RemoveContainer" containerID="0a3e236b32aacbd98907cad84f2cacdbf1a91b60bdcf515043a1f44171537a50" Nov 26 07:25:27 crc kubenswrapper[4909]: I1126 07:25:27.962714 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 26 07:25:28 crc kubenswrapper[4909]: I1126 07:25:28.081713 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" event={"ID":"9f41a032-71ff-4608-aa2c-b16469fe55a0","Type":"ContainerStarted","Data":"892f42673f40a4c512258468db1bb8557bb9f3cb50e7f6b19679156a4608d337"} Nov 26 07:25:28 crc kubenswrapper[4909]: I1126 07:25:28.081971 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:25:28 crc kubenswrapper[4909]: I1126 07:25:28.084513 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" event={"ID":"fea4eb2c-ad33-4504-a4e4-8c82875b2d0c","Type":"ContainerStarted","Data":"9e8e0d32e9bd056f378ea55dea5bbd946275f63d35d284eb12567736105ace3a"} Nov 26 07:25:28 crc kubenswrapper[4909]: I1126 07:25:28.084716 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:25:28 crc kubenswrapper[4909]: I1126 07:25:28.086237 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" event={"ID":"8c9c6404-9f47-434c-ac1b-d08cd48d5156","Type":"ContainerStarted","Data":"c9af334774dd74bdf5ed0cc88ab0e30256df87cfdfb8ff6a533f9bf6376598e0"} Nov 26 07:25:28 crc kubenswrapper[4909]: I1126 07:25:28.086567 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 07:25:28 crc kubenswrapper[4909]: I1126 07:25:28.547316 4909 scope.go:117] "RemoveContainer" containerID="e0f30d1d8fef4008a430567ff4a441b4a2e3652ff27aa09b475429f8800b45b8" Nov 26 07:25:29 crc kubenswrapper[4909]: I1126 07:25:29.084890 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 26 07:25:29 crc kubenswrapper[4909]: I1126 07:25:29.096395 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" event={"ID":"af4a09dd-04e0-465d-a817-bacf1a52babe","Type":"ContainerStarted","Data":"751a04861c44b7222daf9cf53d291c0dbb26d66ecabe3d234783135a996ca635"} Nov 26 07:25:29 crc kubenswrapper[4909]: I1126 07:25:29.096640 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:25:29 crc kubenswrapper[4909]: I1126 07:25:29.098155 4909 generic.go:334] "Generic (PLEG): container finished" podID="59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4" containerID="536faeed70c9a05a03076564921bf46c5aa9037f3fb42ec5c946ca42f55e2412" exitCode=0 Nov 26 07:25:29 crc kubenswrapper[4909]: I1126 07:25:29.098255 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" event={"ID":"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4","Type":"ContainerDied","Data":"536faeed70c9a05a03076564921bf46c5aa9037f3fb42ec5c946ca42f55e2412"} 
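The marketplace-operator entries just above and below trace one full container restart cycle as the kubelet sees it: the PLEG (pod lifecycle event generator) relists container state and reports ContainerDied for container 536faeed… with exitCode=0, the sync loop then marks the dead ID for RemoveContainer, and a replacement container (6f1fad02…, in the entries that follow) starts and is probed until its readiness status flips to "ready". The following self-contained Go sketch models that event flow; the type and field names are illustrative stand-ins mirroring what the log prints, not the kubelet's actual internal API.

```go
// Minimal sketch of the event shape behind the "SyncLoop (PLEG)" lines.
// All names here are illustrative, chosen to match the log output.
package main

import "fmt"

// EventType names the two lifecycle transitions visible in this log.
type EventType string

const (
	ContainerStarted EventType = "ContainerStarted"
	ContainerDied    EventType = "ContainerDied"
)

// PodLifecycleEvent models one "event for pod" record: the pod UID,
// the transition, and the container ID carried in the Data field.
type PodLifecycleEvent struct {
	PodUID string
	Type   EventType
	Data   string // container ID, as printed in the log
}

func main() {
	// The marketplace-operator sequence from the surrounding entries;
	// container IDs are truncated here for readability.
	events := []PodLifecycleEvent{
		{PodUID: "59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4", Type: ContainerDied, Data: "536faeed70c9..."},
		{PodUID: "59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4", Type: ContainerStarted, Data: "6f1fad02011e..."},
	}
	for _, ev := range events {
		switch ev.Type {
		case ContainerDied:
			// A died container becomes a candidate for the
			// "RemoveContainer" cleanup seen in scope.go entries.
			fmt.Printf("pod %s: container %s exited; schedule RemoveContainer\n", ev.PodUID, ev.Data)
		case ContainerStarted:
			// A started container is probed until the log shows
			// probe="readiness" status="ready".
			fmt.Printf("pod %s: container %s started; run readiness probe\n", ev.PodUID, ev.Data)
		}
	}
}
```

The same Died/Removed/Started/probe pattern repeats throughout this section for the openstack-operators controller-manager pods and, later, for the marketplace catalog pods (community-operators-*), so the sketch above reads as a template for every restart cycle recorded here.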
Nov 26 07:25:29 crc kubenswrapper[4909]: I1126 07:25:29.098950 4909 scope.go:117] "RemoveContainer" containerID="536faeed70c9a05a03076564921bf46c5aa9037f3fb42ec5c946ca42f55e2412" Nov 26 07:25:30 crc kubenswrapper[4909]: I1126 07:25:30.108719 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" event={"ID":"59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4","Type":"ContainerStarted","Data":"6f1fad02011e14f944ddb39e4510595a2d9d33fdae43897d67e2a8aee666ad66"} Nov 26 07:25:30 crc kubenswrapper[4909]: I1126 07:25:30.109289 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:25:30 crc kubenswrapper[4909]: I1126 07:25:30.113040 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-s7vvj" Nov 26 07:25:31 crc kubenswrapper[4909]: I1126 07:25:31.121515 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 26 07:25:31 crc kubenswrapper[4909]: I1126 07:25:31.123973 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 26 07:25:31 crc kubenswrapper[4909]: I1126 07:25:31.124021 4909 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5e6c9ad8f6da8cf9699e7224aa5b0a87401d251855213141056855b774be1077" exitCode=137 Nov 26 07:25:31 crc kubenswrapper[4909]: I1126 07:25:31.124093 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5e6c9ad8f6da8cf9699e7224aa5b0a87401d251855213141056855b774be1077"} Nov 26 07:25:31 crc kubenswrapper[4909]: I1126 07:25:31.124143 4909 scope.go:117] "RemoveContainer" containerID="00afc110728f6cbfa96cbe2462114e4808d60ced987d5e07174ff39c1728310c" Nov 26 07:25:31 crc kubenswrapper[4909]: I1126 07:25:31.938343 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-2cwgh" Nov 26 07:25:31 crc kubenswrapper[4909]: I1126 07:25:31.939684 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:25:31 crc kubenswrapper[4909]: I1126 07:25:31.942434 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-748967c98-2x9sp" Nov 26 07:25:31 crc kubenswrapper[4909]: I1126 07:25:31.978300 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-scqbd" Nov 26 07:25:31 crc kubenswrapper[4909]: I1126 07:25:31.993098 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-6j4kw" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.016144 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-692sc" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.057271 4909 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-sphql" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.132563 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.133688 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"043dce96a1a30f54438c63f569e2dd06368ada2fecbb54598f30c5b4cc3dc923"} Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.151935 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.155609 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-4gr4q" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.316910 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-phclr" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.387540 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-swdlm" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.402270 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-54485f899-8486p" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.413555 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-c696l" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.469726 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-xcrzl" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.490925 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-867d87977b-5t9sx" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.559495 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-7rjcs" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.618974 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-872rr" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.656255 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 26 07:25:32 crc kubenswrapper[4909]: I1126 07:25:32.673609 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-d44wm" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.283554 4909 scope.go:117] "RemoveContainer" containerID="ef965809615405e1783c11f175843f12ff2d4725c7daecb6e6327caf95f0a466" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.334067 4909 scope.go:117] 
"RemoveContainer" containerID="6210da3d155444e7f371d4bca257df57852024396456b640445db6f06a1f1fd5" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.380510 4909 scope.go:117] "RemoveContainer" containerID="ed9c1df95b174069c67e0e39f7fdb40918daee66332e03c73a3218bd35e863f3" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.426248 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-68c78b6ff8-dmnlq" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.428001 4909 scope.go:117] "RemoveContainer" containerID="5e90b4e5ac6667f9daf762f2e6bc66cd6b609799cd175f03b8cc8c7f7b63151c" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.482433 4909 scope.go:117] "RemoveContainer" containerID="a05a2e8d981ebb4cc5877598dba394fd26e24d76c6a72edee8536bc2f0214b86" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.508073 4909 scope.go:117] "RemoveContainer" containerID="e41849efae31b0c8581f7c9f6ee28750c66a400db628e1800e2864b8a75f5b77" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.531548 4909 scope.go:117] "RemoveContainer" containerID="18c69a50eb20ebbdeb9c3c4cec5b96f232a261f134a17ae2bf389ddcaf0b29a6" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.557615 4909 scope.go:117] "RemoveContainer" containerID="b16d29712c871d2500018cadeedd75995eef5d750f3d147370f67dfe52f6384f" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.588111 4909 scope.go:117] "RemoveContainer" containerID="7dbcd9530f98b4291aedc42f04a1ccaf1afe80da54613d30af8e73b73490b9c0" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.637677 4909 scope.go:117] "RemoveContainer" containerID="f830e018627073977f605e520fbf64ada9095f6bb653e33ef0ca390f3eb5fabe" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.659148 4909 scope.go:117] "RemoveContainer" containerID="91d1d8757dc97d8802d5b7224b14de05dd0bc7dc749f4656e24f8e2938bed616" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.680102 4909 scope.go:117] "RemoveContainer" containerID="28547ad618498ebe7793a9e4cfb0178020778a8b517d6b749e977e83c72864c4" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.700715 4909 scope.go:117] "RemoveContainer" containerID="6ce5bc27dbcd8bc437bbc74ad6462b2ac8d4570a131a3d43b4e39c235a6f2b13" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.723033 4909 scope.go:117] "RemoveContainer" containerID="b0232145ed4b3712ecaad8243ac7d77b6582f6fbaac7a7c0a418835faaca93d0" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.750101 4909 scope.go:117] "RemoveContainer" containerID="6c56e8b96818340bf2a6e82e312eadfbbb8344acf6710722d0e92ef85ab96ebb" Nov 26 07:25:33 crc kubenswrapper[4909]: I1126 07:25:33.800498 4909 scope.go:117] "RemoveContainer" containerID="e94e839d97164314b0ed8d5b6e1c88d716cfd42776a9c7b163da0d38843a1b27" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.672717 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mmqrq"] Nov 26 07:25:34 crc kubenswrapper[4909]: E1126 07:25:34.673030 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" containerName="installer" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.673040 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" containerName="installer" Nov 26 07:25:34 crc kubenswrapper[4909]: E1126 07:25:34.673059 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.673064 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.673207 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.673219 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f49be5a2-7d31-4cf1-89cb-205755ea8592" containerName="installer" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.674262 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.689562 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mmqrq"] Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.826073 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-utilities\") pod \"community-operators-mmqrq\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.826132 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glxr4\" (UniqueName: \"kubernetes.io/projected/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-kube-api-access-glxr4\") pod \"community-operators-mmqrq\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.826183 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-catalog-content\") pod \"community-operators-mmqrq\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.927964 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-utilities\") pod \"community-operators-mmqrq\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.928021 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glxr4\" (UniqueName: \"kubernetes.io/projected/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-kube-api-access-glxr4\") pod \"community-operators-mmqrq\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.928080 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-catalog-content\") pod \"community-operators-mmqrq\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.928543 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-utilities\") pod \"community-operators-mmqrq\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.928670 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-catalog-content\") pod \"community-operators-mmqrq\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.947932 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glxr4\" (UniqueName: \"kubernetes.io/projected/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-kube-api-access-glxr4\") pod \"community-operators-mmqrq\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:34 crc kubenswrapper[4909]: I1126 07:25:34.992753 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:35 crc kubenswrapper[4909]: I1126 07:25:35.288743 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-mn5hl" Nov 26 07:25:35 crc kubenswrapper[4909]: I1126 07:25:35.348138 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mmqrq"] Nov 26 07:25:35 crc kubenswrapper[4909]: W1126 07:25:35.352526 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4d456cd_a10d_4a92_a7b2_ab6269f7297d.slice/crio-d00252ca95f1371613e9d2dad1f13f0c7c0bb77fb3486b16ac7abd926300561d WatchSource:0}: Error finding container d00252ca95f1371613e9d2dad1f13f0c7c0bb77fb3486b16ac7abd926300561d: Status 404 returned error can't find the container with id d00252ca95f1371613e9d2dad1f13f0c7c0bb77fb3486b16ac7abd926300561d Nov 26 07:25:36 crc kubenswrapper[4909]: I1126 07:25:36.163809 4909 generic.go:334] "Generic (PLEG): container finished" podID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" containerID="19da9c97a921700432c8413a2efb40c1325dea44be507e4428eb6e8383a1a221" exitCode=0 Nov 26 07:25:36 crc kubenswrapper[4909]: I1126 07:25:36.163860 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmqrq" event={"ID":"b4d456cd-a10d-4a92-a7b2-ab6269f7297d","Type":"ContainerDied","Data":"19da9c97a921700432c8413a2efb40c1325dea44be507e4428eb6e8383a1a221"} Nov 26 07:25:36 crc kubenswrapper[4909]: I1126 07:25:36.163889 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmqrq" event={"ID":"b4d456cd-a10d-4a92-a7b2-ab6269f7297d","Type":"ContainerStarted","Data":"d00252ca95f1371613e9d2dad1f13f0c7c0bb77fb3486b16ac7abd926300561d"} Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.062552 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjkw9"] Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.063145 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kjkw9" podUID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" containerName="registry-server" containerID="cri-o://2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e" 
gracePeriod=2 Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.175080 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmqrq" event={"ID":"b4d456cd-a10d-4a92-a7b2-ab6269f7297d","Type":"ContainerStarted","Data":"550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944"} Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.301324 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.301734 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.488750 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.665265 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwcqz\" (UniqueName: \"kubernetes.io/projected/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-kube-api-access-vwcqz\") pod \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\" (UID: \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.665420 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-utilities\") pod \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\" (UID: \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.665486 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-catalog-content\") pod \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\" (UID: \"e3c784d2-fc47-4ff8-b47e-e05fcf89613a\") " Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.666509 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-utilities" (OuterVolumeSpecName: "utilities") pod "e3c784d2-fc47-4ff8-b47e-e05fcf89613a" (UID: "e3c784d2-fc47-4ff8-b47e-e05fcf89613a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.671868 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-kube-api-access-vwcqz" (OuterVolumeSpecName: "kube-api-access-vwcqz") pod "e3c784d2-fc47-4ff8-b47e-e05fcf89613a" (UID: "e3c784d2-fc47-4ff8-b47e-e05fcf89613a"). InnerVolumeSpecName "kube-api-access-vwcqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.767385 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.767430 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwcqz\" (UniqueName: \"kubernetes.io/projected/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-kube-api-access-vwcqz\") on node \"crc\" DevicePath \"\"" Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.774709 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3c784d2-fc47-4ff8-b47e-e05fcf89613a" (UID: "e3c784d2-fc47-4ff8-b47e-e05fcf89613a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:25:37 crc kubenswrapper[4909]: I1126 07:25:37.868690 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3c784d2-fc47-4ff8-b47e-e05fcf89613a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.184719 4909 generic.go:334] "Generic (PLEG): container finished" podID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" containerID="2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e" exitCode=0 Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.184774 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjkw9" event={"ID":"e3c784d2-fc47-4ff8-b47e-e05fcf89613a","Type":"ContainerDied","Data":"2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e"} Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.184823 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjkw9" event={"ID":"e3c784d2-fc47-4ff8-b47e-e05fcf89613a","Type":"ContainerDied","Data":"d1d6aa1944f2bc2f3e3aedc93cb4bd5bc3d1a3ddafc93d1bde87eefd71d4f3d7"} Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.184900 4909 scope.go:117] "RemoveContainer" containerID="2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e" Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.185698 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kjkw9" Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.187085 4909 generic.go:334] "Generic (PLEG): container finished" podID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" containerID="550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944" exitCode=0 Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.187116 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmqrq" event={"ID":"b4d456cd-a10d-4a92-a7b2-ab6269f7297d","Type":"ContainerDied","Data":"550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944"} Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.203950 4909 scope.go:117] "RemoveContainer" containerID="ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848" Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.222440 4909 scope.go:117] "RemoveContainer" containerID="e973c9cf225406054e477595208778f4ee54171effd5a7c5ffc230760e24281c" Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.234272 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjkw9"] Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.240489 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kjkw9"] Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.251643 4909 scope.go:117] "RemoveContainer" containerID="2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e" Nov 26 07:25:38 crc kubenswrapper[4909]: E1126 07:25:38.252154 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e\": container with ID starting with 2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e not found: ID does not exist" containerID="2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e" Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.252238 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e"} err="failed to get container status \"2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e\": rpc error: code = NotFound desc = could not find container \"2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e\": container with ID starting with 2701a69041eec163deeb6c3f96dd01c330c5661798982b474067bc8c8f21242e not found: ID does not exist" Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.252311 4909 scope.go:117] "RemoveContainer" containerID="ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848" Nov 26 07:25:38 crc kubenswrapper[4909]: E1126 07:25:38.253733 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848\": container with ID starting with ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848 not found: ID does not exist" containerID="ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848" Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.253804 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848"} err="failed to get container status 
\"ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848\": rpc error: code = NotFound desc = could not find container \"ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848\": container with ID starting with ef15968465c5960d2828fa9ed718cd86e5bc4fa369285f8fa60b3bd8dcec6848 not found: ID does not exist" Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.253836 4909 scope.go:117] "RemoveContainer" containerID="e973c9cf225406054e477595208778f4ee54171effd5a7c5ffc230760e24281c" Nov 26 07:25:38 crc kubenswrapper[4909]: E1126 07:25:38.254146 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e973c9cf225406054e477595208778f4ee54171effd5a7c5ffc230760e24281c\": container with ID starting with e973c9cf225406054e477595208778f4ee54171effd5a7c5ffc230760e24281c not found: ID does not exist" containerID="e973c9cf225406054e477595208778f4ee54171effd5a7c5ffc230760e24281c" Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.254243 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e973c9cf225406054e477595208778f4ee54171effd5a7c5ffc230760e24281c"} err="failed to get container status \"e973c9cf225406054e477595208778f4ee54171effd5a7c5ffc230760e24281c\": rpc error: code = NotFound desc = could not find container \"e973c9cf225406054e477595208778f4ee54171effd5a7c5ffc230760e24281c\": container with ID starting with e973c9cf225406054e477595208778f4ee54171effd5a7c5ffc230760e24281c not found: ID does not exist" Nov 26 07:25:38 crc kubenswrapper[4909]: I1126 07:25:38.506368 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" path="/var/lib/kubelet/pods/e3c784d2-fc47-4ff8-b47e-e05fcf89613a/volumes" Nov 26 07:25:39 crc kubenswrapper[4909]: I1126 07:25:39.200115 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmqrq" event={"ID":"b4d456cd-a10d-4a92-a7b2-ab6269f7297d","Type":"ContainerStarted","Data":"c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a"} Nov 26 07:25:39 crc kubenswrapper[4909]: I1126 07:25:39.227371 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mmqrq" podStartSLOduration=2.77801766 podStartE2EDuration="5.227354812s" podCreationTimestamp="2025-11-26 07:25:34 +0000 UTC" firstStartedPulling="2025-11-26 07:25:36.165508339 +0000 UTC m=+1508.311719505" lastFinishedPulling="2025-11-26 07:25:38.614845491 +0000 UTC m=+1510.761056657" observedRunningTime="2025-11-26 07:25:39.224011391 +0000 UTC m=+1511.370222557" watchObservedRunningTime="2025-11-26 07:25:39.227354812 +0000 UTC m=+1511.373565978" Nov 26 07:25:40 crc kubenswrapper[4909]: I1126 07:25:40.531144 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:25:40 crc kubenswrapper[4909]: I1126 07:25:40.537041 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:25:40 crc kubenswrapper[4909]: I1126 07:25:40.695453 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:25:41 crc kubenswrapper[4909]: I1126 07:25:41.220646 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 07:25:42 crc kubenswrapper[4909]: I1126 07:25:42.275829 4909 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 26 07:25:42 crc kubenswrapper[4909]: I1126 07:25:42.608774 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-58dcdd989d-ctkx2" Nov 26 07:25:42 crc kubenswrapper[4909]: I1126 07:25:42.653096 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-dhp84" Nov 26 07:25:44 crc kubenswrapper[4909]: I1126 07:25:44.992876 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:44 crc kubenswrapper[4909]: I1126 07:25:44.993288 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:45 crc kubenswrapper[4909]: I1126 07:25:45.067369 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:25:45 crc kubenswrapper[4909]: I1126 07:25:45.311698 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:26:04 crc kubenswrapper[4909]: I1126 07:26:04.714776 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c7trt"] Nov 26 07:26:04 crc kubenswrapper[4909]: E1126 07:26:04.715827 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" containerName="registry-server" Nov 26 07:26:04 crc kubenswrapper[4909]: I1126 07:26:04.715843 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" containerName="registry-server" Nov 26 07:26:04 crc kubenswrapper[4909]: E1126 07:26:04.715883 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" containerName="extract-content" Nov 26 07:26:04 crc kubenswrapper[4909]: I1126 07:26:04.715891 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" containerName="extract-content" Nov 26 07:26:04 crc kubenswrapper[4909]: E1126 07:26:04.715901 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" containerName="extract-utilities" Nov 26 07:26:04 crc kubenswrapper[4909]: I1126 07:26:04.715909 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" containerName="extract-utilities" Nov 26 07:26:04 crc kubenswrapper[4909]: I1126 07:26:04.716072 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c784d2-fc47-4ff8-b47e-e05fcf89613a" containerName="registry-server" Nov 26 07:26:04 crc kubenswrapper[4909]: I1126 07:26:04.717252 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:04 crc kubenswrapper[4909]: I1126 07:26:04.726060 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c7trt"] Nov 26 07:26:04 crc kubenswrapper[4909]: I1126 07:26:04.915919 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-catalog-content\") pod \"community-operators-c7trt\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") " pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:04 crc kubenswrapper[4909]: I1126 07:26:04.916309 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5zh4\" (UniqueName: \"kubernetes.io/projected/05b80a4e-fb4d-453e-a4df-f583987f8533-kube-api-access-l5zh4\") pod \"community-operators-c7trt\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") " pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:04 crc kubenswrapper[4909]: I1126 07:26:04.916353 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-utilities\") pod \"community-operators-c7trt\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") " pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.017226 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5zh4\" (UniqueName: \"kubernetes.io/projected/05b80a4e-fb4d-453e-a4df-f583987f8533-kube-api-access-l5zh4\") pod \"community-operators-c7trt\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") " pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.017311 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-utilities\") pod \"community-operators-c7trt\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") " pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.017338 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-catalog-content\") pod \"community-operators-c7trt\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") " pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.017961 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-catalog-content\") pod \"community-operators-c7trt\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") " pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.018495 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-utilities\") pod \"community-operators-c7trt\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") " pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.042908 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l5zh4\" (UniqueName: \"kubernetes.io/projected/05b80a4e-fb4d-453e-a4df-f583987f8533-kube-api-access-l5zh4\") pod \"community-operators-c7trt\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") " pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.337635 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.502658 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wbqfg"] Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.504897 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.518524 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wbqfg"] Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.626453 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-utilities\") pod \"community-operators-wbqfg\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") " pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.626694 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-catalog-content\") pod \"community-operators-wbqfg\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") " pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.626898 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lff22\" (UniqueName: \"kubernetes.io/projected/1a720e79-f385-40a0-a73c-5298c3b2596b-kube-api-access-lff22\") pod \"community-operators-wbqfg\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") " pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.730774 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lff22\" (UniqueName: \"kubernetes.io/projected/1a720e79-f385-40a0-a73c-5298c3b2596b-kube-api-access-lff22\") pod \"community-operators-wbqfg\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") " pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.730877 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-utilities\") pod \"community-operators-wbqfg\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") " pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.730992 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-catalog-content\") pod \"community-operators-wbqfg\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") " pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.731327 4909 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-utilities\") pod \"community-operators-wbqfg\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") " pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.731362 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-catalog-content\") pod \"community-operators-wbqfg\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") " pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.767978 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lff22\" (UniqueName: \"kubernetes.io/projected/1a720e79-f385-40a0-a73c-5298c3b2596b-kube-api-access-lff22\") pod \"community-operators-wbqfg\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") " pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.795768 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c7trt"] Nov 26 07:26:05 crc kubenswrapper[4909]: W1126 07:26:05.802709 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b80a4e_fb4d_453e_a4df_f583987f8533.slice/crio-e489e2e139c505afc184d746f1fbfaba12cae84005be53d477663a932ce74920 WatchSource:0}: Error finding container e489e2e139c505afc184d746f1fbfaba12cae84005be53d477663a932ce74920: Status 404 returned error can't find the container with id e489e2e139c505afc184d746f1fbfaba12cae84005be53d477663a932ce74920 Nov 26 07:26:05 crc kubenswrapper[4909]: I1126 07:26:05.831967 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.275020 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wbqfg"] Nov 26 07:26:06 crc kubenswrapper[4909]: W1126 07:26:06.280334 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a720e79_f385_40a0_a73c_5298c3b2596b.slice/crio-882d1e22fae694ae481ba6d7b94eed55c4b40ad8361b46bf4674127c5f85ef8f WatchSource:0}: Error finding container 882d1e22fae694ae481ba6d7b94eed55c4b40ad8361b46bf4674127c5f85ef8f: Status 404 returned error can't find the container with id 882d1e22fae694ae481ba6d7b94eed55c4b40ad8361b46bf4674127c5f85ef8f Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.427057 4909 generic.go:334] "Generic (PLEG): container finished" podID="05b80a4e-fb4d-453e-a4df-f583987f8533" containerID="541b682545f88aebc15d8f74b5385d530fd10bd276c5e76066472b2e3de285ba" exitCode=0 Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.427176 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7trt" event={"ID":"05b80a4e-fb4d-453e-a4df-f583987f8533","Type":"ContainerDied","Data":"541b682545f88aebc15d8f74b5385d530fd10bd276c5e76066472b2e3de285ba"} Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.427235 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7trt" event={"ID":"05b80a4e-fb4d-453e-a4df-f583987f8533","Type":"ContainerStarted","Data":"e489e2e139c505afc184d746f1fbfaba12cae84005be53d477663a932ce74920"} Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.432849 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbqfg" event={"ID":"1a720e79-f385-40a0-a73c-5298c3b2596b","Type":"ContainerStarted","Data":"882d1e22fae694ae481ba6d7b94eed55c4b40ad8361b46bf4674127c5f85ef8f"} Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.690162 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9cttw"] Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.692104 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.701312 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9cttw"] Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.756838 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-utilities\") pod \"community-operators-9cttw\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") " pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.756968 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xlfb\" (UniqueName: \"kubernetes.io/projected/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-kube-api-access-7xlfb\") pod \"community-operators-9cttw\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") " pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.757078 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-catalog-content\") pod \"community-operators-9cttw\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") " pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.858315 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-utilities\") pod \"community-operators-9cttw\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") " pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.858443 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xlfb\" (UniqueName: \"kubernetes.io/projected/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-kube-api-access-7xlfb\") pod \"community-operators-9cttw\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") " pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.858522 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-catalog-content\") pod \"community-operators-9cttw\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") " pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.859227 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-utilities\") pod \"community-operators-9cttw\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") " pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.859298 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-catalog-content\") pod \"community-operators-9cttw\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") " pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:06 crc kubenswrapper[4909]: I1126 07:26:06.879221 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7xlfb\" (UniqueName: \"kubernetes.io/projected/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-kube-api-access-7xlfb\") pod \"community-operators-9cttw\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") " pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.009559 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.301295 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.301912 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.302036 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.303531 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.303654 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" gracePeriod=600 Nov 26 07:26:07 crc kubenswrapper[4909]: E1126 07:26:07.438356 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.445029 4909 generic.go:334] "Generic (PLEG): container finished" podID="1a720e79-f385-40a0-a73c-5298c3b2596b" containerID="2113241a4535dd64013c547d44e7939afd7aac06133fd6ae12d9baab30eebffb" exitCode=0 Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.445101 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbqfg" event={"ID":"1a720e79-f385-40a0-a73c-5298c3b2596b","Type":"ContainerDied","Data":"2113241a4535dd64013c547d44e7939afd7aac06133fd6ae12d9baab30eebffb"} Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.449816 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" 
containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" exitCode=0 Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.449887 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246"} Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.449927 4909 scope.go:117] "RemoveContainer" containerID="01a4d185a8d7c30690fef08cf37e1461869ce637ebe4ac1e55eebb9783625426" Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.450548 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:26:07 crc kubenswrapper[4909]: E1126 07:26:07.450813 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.456871 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7trt" event={"ID":"05b80a4e-fb4d-453e-a4df-f583987f8533","Type":"ContainerStarted","Data":"584363d64e7a32c2ba71608fa2b9dfd1ee7dfcc966a13fa8db5c8da474e9b2d6"} Nov 26 07:26:07 crc kubenswrapper[4909]: I1126 07:26:07.457945 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9cttw"] Nov 26 07:26:08 crc kubenswrapper[4909]: I1126 07:26:08.465814 4909 generic.go:334] "Generic (PLEG): container finished" podID="05b80a4e-fb4d-453e-a4df-f583987f8533" containerID="584363d64e7a32c2ba71608fa2b9dfd1ee7dfcc966a13fa8db5c8da474e9b2d6" exitCode=0 Nov 26 07:26:08 crc kubenswrapper[4909]: I1126 07:26:08.465872 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7trt" event={"ID":"05b80a4e-fb4d-453e-a4df-f583987f8533","Type":"ContainerDied","Data":"584363d64e7a32c2ba71608fa2b9dfd1ee7dfcc966a13fa8db5c8da474e9b2d6"} Nov 26 07:26:08 crc kubenswrapper[4909]: I1126 07:26:08.467718 4909 generic.go:334] "Generic (PLEG): container finished" podID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" containerID="27c4b71f1b9b7214b9e115e4770ae54c79af478bb85e4b6bed6d4f7ad7f2be8c" exitCode=0 Nov 26 07:26:08 crc kubenswrapper[4909]: I1126 07:26:08.467797 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cttw" event={"ID":"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36","Type":"ContainerDied","Data":"27c4b71f1b9b7214b9e115e4770ae54c79af478bb85e4b6bed6d4f7ad7f2be8c"} Nov 26 07:26:08 crc kubenswrapper[4909]: I1126 07:26:08.467826 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cttw" event={"ID":"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36","Type":"ContainerStarted","Data":"add0beea53294ab058efc433bf31ae7b07490b393552df0babedc8489a209b3f"} Nov 26 07:26:08 crc kubenswrapper[4909]: I1126 07:26:08.471487 4909 generic.go:334] "Generic (PLEG): container finished" podID="1a720e79-f385-40a0-a73c-5298c3b2596b" containerID="16c6710dc2e2ccf8fb4069d4dba5aa20ff97cbe2c23b0fd89f10c19a16d4299b" exitCode=0 Nov 26 07:26:08 crc kubenswrapper[4909]: I1126 
07:26:08.471551 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbqfg" event={"ID":"1a720e79-f385-40a0-a73c-5298c3b2596b","Type":"ContainerDied","Data":"16c6710dc2e2ccf8fb4069d4dba5aa20ff97cbe2c23b0fd89f10c19a16d4299b"} Nov 26 07:26:09 crc kubenswrapper[4909]: I1126 07:26:09.484757 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7trt" event={"ID":"05b80a4e-fb4d-453e-a4df-f583987f8533","Type":"ContainerStarted","Data":"29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b"} Nov 26 07:26:09 crc kubenswrapper[4909]: I1126 07:26:09.486493 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cttw" event={"ID":"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36","Type":"ContainerStarted","Data":"2b6adc139cf8f5a96deb43c1038347c52daaa73394385d4175ead2d328f25932"} Nov 26 07:26:09 crc kubenswrapper[4909]: I1126 07:26:09.488268 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbqfg" event={"ID":"1a720e79-f385-40a0-a73c-5298c3b2596b","Type":"ContainerStarted","Data":"bb50038703c615cd88b8c4b6ebdb732f82baf5ad3a6c25460b78e0cf6ea40846"} Nov 26 07:26:09 crc kubenswrapper[4909]: I1126 07:26:09.502795 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c7trt" podStartSLOduration=3.080271145 podStartE2EDuration="5.502779223s" podCreationTimestamp="2025-11-26 07:26:04 +0000 UTC" firstStartedPulling="2025-11-26 07:26:06.428366397 +0000 UTC m=+1538.574577563" lastFinishedPulling="2025-11-26 07:26:08.850874485 +0000 UTC m=+1540.997085641" observedRunningTime="2025-11-26 07:26:09.502053414 +0000 UTC m=+1541.648264580" watchObservedRunningTime="2025-11-26 07:26:09.502779223 +0000 UTC m=+1541.648990389" Nov 26 07:26:09 crc kubenswrapper[4909]: I1126 07:26:09.530573 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wbqfg" podStartSLOduration=2.902099712 podStartE2EDuration="4.530555733s" podCreationTimestamp="2025-11-26 07:26:05 +0000 UTC" firstStartedPulling="2025-11-26 07:26:07.446879547 +0000 UTC m=+1539.593090713" lastFinishedPulling="2025-11-26 07:26:09.075335568 +0000 UTC m=+1541.221546734" observedRunningTime="2025-11-26 07:26:09.524822997 +0000 UTC m=+1541.671034173" watchObservedRunningTime="2025-11-26 07:26:09.530555733 +0000 UTC m=+1541.676766899" Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.491190 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4c5r4"] Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.493078 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.500897 4909 generic.go:334] "Generic (PLEG): container finished" podID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" containerID="2b6adc139cf8f5a96deb43c1038347c52daaa73394385d4175ead2d328f25932" exitCode=0 Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.522368 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cttw" event={"ID":"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36","Type":"ContainerDied","Data":"2b6adc139cf8f5a96deb43c1038347c52daaa73394385d4175ead2d328f25932"} Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.522414 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4c5r4"] Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.625290 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-catalog-content\") pod \"community-operators-4c5r4\" (UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") " pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.625345 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-utilities\") pod \"community-operators-4c5r4\" (UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") " pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.625413 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxbt\" (UniqueName: \"kubernetes.io/projected/b19d0268-905b-486f-835f-4b1d3d293940-kube-api-access-tvxbt\") pod \"community-operators-4c5r4\" (UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") " pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.726781 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxbt\" (UniqueName: \"kubernetes.io/projected/b19d0268-905b-486f-835f-4b1d3d293940-kube-api-access-tvxbt\") pod \"community-operators-4c5r4\" (UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") " pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.726866 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-catalog-content\") pod \"community-operators-4c5r4\" (UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") " pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.726889 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-utilities\") pod \"community-operators-4c5r4\" (UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") " pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.727398 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-catalog-content\") pod \"community-operators-4c5r4\" 
(UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") " pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.727430 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-utilities\") pod \"community-operators-4c5r4\" (UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") " pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.757774 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxbt\" (UniqueName: \"kubernetes.io/projected/b19d0268-905b-486f-835f-4b1d3d293940-kube-api-access-tvxbt\") pod \"community-operators-4c5r4\" (UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") " pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:10 crc kubenswrapper[4909]: I1126 07:26:10.812818 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:11 crc kubenswrapper[4909]: I1126 07:26:11.298123 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4c5r4"] Nov 26 07:26:11 crc kubenswrapper[4909]: W1126 07:26:11.305531 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb19d0268_905b_486f_835f_4b1d3d293940.slice/crio-3572a7b2bce33b8ed1327e34e7d5f1f8e19d311370dd85ece14fc05620d3b431 WatchSource:0}: Error finding container 3572a7b2bce33b8ed1327e34e7d5f1f8e19d311370dd85ece14fc05620d3b431: Status 404 returned error can't find the container with id 3572a7b2bce33b8ed1327e34e7d5f1f8e19d311370dd85ece14fc05620d3b431 Nov 26 07:26:11 crc kubenswrapper[4909]: I1126 07:26:11.511293 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cttw" event={"ID":"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36","Type":"ContainerStarted","Data":"4ffbaf3d35fc0c0ff83d07319706c50d46035cb35c3379e73d03363beecb8099"} Nov 26 07:26:11 crc kubenswrapper[4909]: I1126 07:26:11.513988 4909 generic.go:334] "Generic (PLEG): container finished" podID="b19d0268-905b-486f-835f-4b1d3d293940" containerID="36c627f4a900cb0e071fc4e481818d32d1a2bea230d4be80a251332e3360e3c5" exitCode=0 Nov 26 07:26:11 crc kubenswrapper[4909]: I1126 07:26:11.514034 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c5r4" event={"ID":"b19d0268-905b-486f-835f-4b1d3d293940","Type":"ContainerDied","Data":"36c627f4a900cb0e071fc4e481818d32d1a2bea230d4be80a251332e3360e3c5"} Nov 26 07:26:11 crc kubenswrapper[4909]: I1126 07:26:11.514060 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c5r4" event={"ID":"b19d0268-905b-486f-835f-4b1d3d293940","Type":"ContainerStarted","Data":"3572a7b2bce33b8ed1327e34e7d5f1f8e19d311370dd85ece14fc05620d3b431"} Nov 26 07:26:11 crc kubenswrapper[4909]: I1126 07:26:11.535656 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9cttw" podStartSLOduration=3.11895061 podStartE2EDuration="5.535634809s" podCreationTimestamp="2025-11-26 07:26:06 +0000 UTC" firstStartedPulling="2025-11-26 07:26:08.468868392 +0000 UTC m=+1540.615079558" lastFinishedPulling="2025-11-26 07:26:10.885552591 +0000 UTC m=+1543.031763757" observedRunningTime="2025-11-26 07:26:11.533245253 +0000 UTC 
m=+1543.679456419" watchObservedRunningTime="2025-11-26 07:26:11.535634809 +0000 UTC m=+1543.681845985" Nov 26 07:26:12 crc kubenswrapper[4909]: I1126 07:26:12.540141 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c5r4" event={"ID":"b19d0268-905b-486f-835f-4b1d3d293940","Type":"ContainerStarted","Data":"b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e"} Nov 26 07:26:13 crc kubenswrapper[4909]: I1126 07:26:13.550362 4909 generic.go:334] "Generic (PLEG): container finished" podID="b19d0268-905b-486f-835f-4b1d3d293940" containerID="b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e" exitCode=0 Nov 26 07:26:13 crc kubenswrapper[4909]: I1126 07:26:13.550464 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c5r4" event={"ID":"b19d0268-905b-486f-835f-4b1d3d293940","Type":"ContainerDied","Data":"b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e"} Nov 26 07:26:14 crc kubenswrapper[4909]: I1126 07:26:14.564155 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c5r4" event={"ID":"b19d0268-905b-486f-835f-4b1d3d293940","Type":"ContainerStarted","Data":"a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a"} Nov 26 07:26:14 crc kubenswrapper[4909]: I1126 07:26:14.597617 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4c5r4" podStartSLOduration=2.11338874 podStartE2EDuration="4.597588015s" podCreationTimestamp="2025-11-26 07:26:10 +0000 UTC" firstStartedPulling="2025-11-26 07:26:11.515287323 +0000 UTC m=+1543.661498499" lastFinishedPulling="2025-11-26 07:26:13.999486608 +0000 UTC m=+1546.145697774" observedRunningTime="2025-11-26 07:26:14.594562263 +0000 UTC m=+1546.740773429" watchObservedRunningTime="2025-11-26 07:26:14.597588015 +0000 UTC m=+1546.743799181" Nov 26 07:26:15 crc kubenswrapper[4909]: I1126 07:26:15.338036 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:15 crc kubenswrapper[4909]: I1126 07:26:15.338085 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:15 crc kubenswrapper[4909]: I1126 07:26:15.376714 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:15 crc kubenswrapper[4909]: I1126 07:26:15.615824 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c7trt" Nov 26 07:26:15 crc kubenswrapper[4909]: I1126 07:26:15.832653 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:15 crc kubenswrapper[4909]: I1126 07:26:15.832797 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:15 crc kubenswrapper[4909]: I1126 07:26:15.875672 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:16 crc kubenswrapper[4909]: I1126 07:26:16.623618 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wbqfg" Nov 26 07:26:17 crc kubenswrapper[4909]: I1126 07:26:17.010219 4909 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:17 crc kubenswrapper[4909]: I1126 07:26:17.010276 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:17 crc kubenswrapper[4909]: I1126 07:26:17.051665 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:17 crc kubenswrapper[4909]: I1126 07:26:17.626396 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9cttw" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.299098 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q7kdz"] Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.303423 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.350637 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7kdz"] Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.374441 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-catalog-content\") pod \"community-operators-q7kdz\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") " pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.374528 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2zd5\" (UniqueName: \"kubernetes.io/projected/620cd68b-58b5-46bf-9389-0e238b55ef9e-kube-api-access-w2zd5\") pod \"community-operators-q7kdz\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") " pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.374574 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-utilities\") pod \"community-operators-q7kdz\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") " pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.475658 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-catalog-content\") pod \"community-operators-q7kdz\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") " pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.475714 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2zd5\" (UniqueName: \"kubernetes.io/projected/620cd68b-58b5-46bf-9389-0e238b55ef9e-kube-api-access-w2zd5\") pod \"community-operators-q7kdz\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") " pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.475795 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-utilities\") pod 
\"community-operators-q7kdz\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") " pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.476201 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-catalog-content\") pod \"community-operators-q7kdz\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") " pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.476279 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-utilities\") pod \"community-operators-q7kdz\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") " pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.493783 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2zd5\" (UniqueName: \"kubernetes.io/projected/620cd68b-58b5-46bf-9389-0e238b55ef9e-kube-api-access-w2zd5\") pod \"community-operators-q7kdz\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") " pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.624853 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.817722 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.818021 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:20 crc kubenswrapper[4909]: I1126 07:26:20.898347 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:21 crc kubenswrapper[4909]: I1126 07:26:21.113736 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7kdz"] Nov 26 07:26:21 crc kubenswrapper[4909]: I1126 07:26:21.498750 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:26:21 crc kubenswrapper[4909]: E1126 07:26:21.498995 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:26:21 crc kubenswrapper[4909]: I1126 07:26:21.617167 4909 generic.go:334] "Generic (PLEG): container finished" podID="620cd68b-58b5-46bf-9389-0e238b55ef9e" containerID="950917fadeb2594ad61447a4a219d0b5925541605ddf11b6afd54d7f8ace04b0" exitCode=0 Nov 26 07:26:21 crc kubenswrapper[4909]: I1126 07:26:21.617291 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7kdz" event={"ID":"620cd68b-58b5-46bf-9389-0e238b55ef9e","Type":"ContainerDied","Data":"950917fadeb2594ad61447a4a219d0b5925541605ddf11b6afd54d7f8ace04b0"} Nov 26 07:26:21 crc kubenswrapper[4909]: I1126 07:26:21.617350 4909 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7kdz" event={"ID":"620cd68b-58b5-46bf-9389-0e238b55ef9e","Type":"ContainerStarted","Data":"99e705f0fd620640fb38ce12318248d014842277db9da08edab53edd27b375b6"} Nov 26 07:26:21 crc kubenswrapper[4909]: I1126 07:26:21.680217 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4c5r4" Nov 26 07:26:22 crc kubenswrapper[4909]: I1126 07:26:22.625875 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7kdz" event={"ID":"620cd68b-58b5-46bf-9389-0e238b55ef9e","Type":"ContainerStarted","Data":"ffc432400995308c705e0091716065f417991434adc602473a63c77b1cb09986"} Nov 26 07:26:22 crc kubenswrapper[4909]: I1126 07:26:22.892776 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rj8n2"] Nov 26 07:26:22 crc kubenswrapper[4909]: I1126 07:26:22.894245 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:22 crc kubenswrapper[4909]: I1126 07:26:22.901677 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj8n2"] Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.013307 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-utilities\") pod \"community-operators-rj8n2\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") " pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.013660 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4499\" (UniqueName: \"kubernetes.io/projected/04fdef56-93e1-4254-89bb-9e27aad42099-kube-api-access-r4499\") pod \"community-operators-rj8n2\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") " pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.013836 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-catalog-content\") pod \"community-operators-rj8n2\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") " pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.115030 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-utilities\") pod \"community-operators-rj8n2\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") " pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.115320 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4499\" (UniqueName: \"kubernetes.io/projected/04fdef56-93e1-4254-89bb-9e27aad42099-kube-api-access-r4499\") pod \"community-operators-rj8n2\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") " pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.115424 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-catalog-content\") pod \"community-operators-rj8n2\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") " pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.116106 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-catalog-content\") pod \"community-operators-rj8n2\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") " pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.116439 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-utilities\") pod \"community-operators-rj8n2\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") " pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.137351 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4499\" (UniqueName: \"kubernetes.io/projected/04fdef56-93e1-4254-89bb-9e27aad42099-kube-api-access-r4499\") pod \"community-operators-rj8n2\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") " pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.217016 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.634809 4909 generic.go:334] "Generic (PLEG): container finished" podID="620cd68b-58b5-46bf-9389-0e238b55ef9e" containerID="ffc432400995308c705e0091716065f417991434adc602473a63c77b1cb09986" exitCode=0 Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.634850 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7kdz" event={"ID":"620cd68b-58b5-46bf-9389-0e238b55ef9e","Type":"ContainerDied","Data":"ffc432400995308c705e0091716065f417991434adc602473a63c77b1cb09986"} Nov 26 07:26:23 crc kubenswrapper[4909]: I1126 07:26:23.645240 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj8n2"] Nov 26 07:26:23 crc kubenswrapper[4909]: W1126 07:26:23.656179 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04fdef56_93e1_4254_89bb_9e27aad42099.slice/crio-33dfc4b4535a55b28ed707d5dfbb402ac96ce135cfc9f12e99b86f2f01eb453b WatchSource:0}: Error finding container 33dfc4b4535a55b28ed707d5dfbb402ac96ce135cfc9f12e99b86f2f01eb453b: Status 404 returned error can't find the container with id 33dfc4b4535a55b28ed707d5dfbb402ac96ce135cfc9f12e99b86f2f01eb453b Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.090310 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-khn47"] Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.092533 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.097940 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khn47"] Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.130363 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-catalog-content\") pod \"community-operators-khn47\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.130413 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-utilities\") pod \"community-operators-khn47\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.130460 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qh5q\" (UniqueName: \"kubernetes.io/projected/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-kube-api-access-9qh5q\") pod \"community-operators-khn47\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.231925 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-catalog-content\") pod \"community-operators-khn47\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.232224 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-utilities\") pod \"community-operators-khn47\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.232297 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qh5q\" (UniqueName: \"kubernetes.io/projected/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-kube-api-access-9qh5q\") pod \"community-operators-khn47\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.232707 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-catalog-content\") pod \"community-operators-khn47\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.232746 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-utilities\") pod \"community-operators-khn47\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.260927 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9qh5q\" (UniqueName: \"kubernetes.io/projected/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-kube-api-access-9qh5q\") pod \"community-operators-khn47\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.452899 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.649884 4909 generic.go:334] "Generic (PLEG): container finished" podID="04fdef56-93e1-4254-89bb-9e27aad42099" containerID="9f41d2e4f9abd34e0d361634cf91b3d7ccae3b99ee9d2a8b8e9c00a8b6734e90" exitCode=0 Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.650041 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n2" event={"ID":"04fdef56-93e1-4254-89bb-9e27aad42099","Type":"ContainerDied","Data":"9f41d2e4f9abd34e0d361634cf91b3d7ccae3b99ee9d2a8b8e9c00a8b6734e90"} Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.650236 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n2" event={"ID":"04fdef56-93e1-4254-89bb-9e27aad42099","Type":"ContainerStarted","Data":"33dfc4b4535a55b28ed707d5dfbb402ac96ce135cfc9f12e99b86f2f01eb453b"} Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.663798 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7kdz" event={"ID":"620cd68b-58b5-46bf-9389-0e238b55ef9e","Type":"ContainerStarted","Data":"fd7954afebd7febe6c6d19a2108e891f060e0f5109f6d8f500d634eb0eb52c1c"} Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.697024 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q7kdz" podStartSLOduration=2.2846205299999998 podStartE2EDuration="4.697005841s" podCreationTimestamp="2025-11-26 07:26:20 +0000 UTC" firstStartedPulling="2025-11-26 07:26:21.618751219 +0000 UTC m=+1553.764962395" lastFinishedPulling="2025-11-26 07:26:24.03113654 +0000 UTC m=+1556.177347706" observedRunningTime="2025-11-26 07:26:24.69514505 +0000 UTC m=+1556.841356216" watchObservedRunningTime="2025-11-26 07:26:24.697005841 +0000 UTC m=+1556.843217007" Nov 26 07:26:24 crc kubenswrapper[4909]: I1126 07:26:24.723718 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khn47"] Nov 26 07:26:24 crc kubenswrapper[4909]: W1126 07:26:24.725530 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8dc5e21_d9a1_4d94_afbe_b1e71b8b27a7.slice/crio-463d49c35ab931b1d8059331b72fce9ddebd0c131bbbfee5ba18d9ba16b3f9b8 WatchSource:0}: Error finding container 463d49c35ab931b1d8059331b72fce9ddebd0c131bbbfee5ba18d9ba16b3f9b8: Status 404 returned error can't find the container with id 463d49c35ab931b1d8059331b72fce9ddebd0c131bbbfee5ba18d9ba16b3f9b8 Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.304861 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4wltp"] Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.307007 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.316614 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4wltp"] Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.449150 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gd4q\" (UniqueName: \"kubernetes.io/projected/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-kube-api-access-9gd4q\") pod \"community-operators-4wltp\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") " pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.449536 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-catalog-content\") pod \"community-operators-4wltp\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") " pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.449565 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-utilities\") pod \"community-operators-4wltp\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") " pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.550893 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gd4q\" (UniqueName: \"kubernetes.io/projected/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-kube-api-access-9gd4q\") pod \"community-operators-4wltp\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") " pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.550981 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-catalog-content\") pod \"community-operators-4wltp\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") " pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.551007 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-utilities\") pod \"community-operators-4wltp\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") " pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.551614 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-utilities\") pod \"community-operators-4wltp\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") " pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.551615 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-catalog-content\") pod \"community-operators-4wltp\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") " pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.573749 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9gd4q\" (UniqueName: \"kubernetes.io/projected/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-kube-api-access-9gd4q\") pod \"community-operators-4wltp\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") " pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.642532 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.679455 4909 generic.go:334] "Generic (PLEG): container finished" podID="04fdef56-93e1-4254-89bb-9e27aad42099" containerID="dbfcf571f94a8d1b1d10e9c41267543262833b3c4bcc3fed9f0cc49df3a681f2" exitCode=0 Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.679540 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n2" event={"ID":"04fdef56-93e1-4254-89bb-9e27aad42099","Type":"ContainerDied","Data":"dbfcf571f94a8d1b1d10e9c41267543262833b3c4bcc3fed9f0cc49df3a681f2"} Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.681105 4909 generic.go:334] "Generic (PLEG): container finished" podID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" containerID="68a6b7d902f8cfe4747cad12ad373928ddd98b92c56159ffe04863b05c6c4e8a" exitCode=0 Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.682066 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khn47" event={"ID":"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7","Type":"ContainerDied","Data":"68a6b7d902f8cfe4747cad12ad373928ddd98b92c56159ffe04863b05c6c4e8a"} Nov 26 07:26:25 crc kubenswrapper[4909]: I1126 07:26:25.682089 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khn47" event={"ID":"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7","Type":"ContainerStarted","Data":"463d49c35ab931b1d8059331b72fce9ddebd0c131bbbfee5ba18d9ba16b3f9b8"} Nov 26 07:26:26 crc kubenswrapper[4909]: W1126 07:26:26.097445 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc02c9c50_c2fd_4e2a_90c6_38c0a46ec7d0.slice/crio-c295da3747e61969f00be5d23edb8aa304f79c71d204ae67d21eb4279decde4f WatchSource:0}: Error finding container c295da3747e61969f00be5d23edb8aa304f79c71d204ae67d21eb4279decde4f: Status 404 returned error can't find the container with id c295da3747e61969f00be5d23edb8aa304f79c71d204ae67d21eb4279decde4f Nov 26 07:26:26 crc kubenswrapper[4909]: I1126 07:26:26.109070 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4wltp"] Nov 26 07:26:26 crc kubenswrapper[4909]: I1126 07:26:26.692357 4909 generic.go:334] "Generic (PLEG): container finished" podID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" containerID="62e5a4e0720904ecbb3393ce5ceb87133a3aef8d221df398268069cdfb82300a" exitCode=0 Nov 26 07:26:26 crc kubenswrapper[4909]: I1126 07:26:26.692406 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khn47" event={"ID":"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7","Type":"ContainerDied","Data":"62e5a4e0720904ecbb3393ce5ceb87133a3aef8d221df398268069cdfb82300a"} Nov 26 07:26:26 crc kubenswrapper[4909]: I1126 07:26:26.694838 4909 generic.go:334] "Generic (PLEG): container finished" podID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" containerID="86b20c744aac93c282eac914e5198996a92f5640af72bda2bfa7fa50f6c1b5db" exitCode=0 Nov 26 07:26:26 crc 
kubenswrapper[4909]: I1126 07:26:26.694911 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4wltp" event={"ID":"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0","Type":"ContainerDied","Data":"86b20c744aac93c282eac914e5198996a92f5640af72bda2bfa7fa50f6c1b5db"} Nov 26 07:26:26 crc kubenswrapper[4909]: I1126 07:26:26.694933 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4wltp" event={"ID":"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0","Type":"ContainerStarted","Data":"c295da3747e61969f00be5d23edb8aa304f79c71d204ae67d21eb4279decde4f"} Nov 26 07:26:26 crc kubenswrapper[4909]: I1126 07:26:26.698249 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n2" event={"ID":"04fdef56-93e1-4254-89bb-9e27aad42099","Type":"ContainerStarted","Data":"3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29"} Nov 26 07:26:26 crc kubenswrapper[4909]: I1126 07:26:26.736134 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rj8n2" podStartSLOduration=3.271897102 podStartE2EDuration="4.736114179s" podCreationTimestamp="2025-11-26 07:26:22 +0000 UTC" firstStartedPulling="2025-11-26 07:26:24.651752453 +0000 UTC m=+1556.797963619" lastFinishedPulling="2025-11-26 07:26:26.11596953 +0000 UTC m=+1558.262180696" observedRunningTime="2025-11-26 07:26:26.728756727 +0000 UTC m=+1558.874967903" watchObservedRunningTime="2025-11-26 07:26:26.736114179 +0000 UTC m=+1558.882325355" Nov 26 07:26:27 crc kubenswrapper[4909]: I1126 07:26:27.684453 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nc94b"] Nov 26 07:26:27 crc kubenswrapper[4909]: I1126 07:26:27.685035 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nc94b" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" containerName="registry-server" containerID="cri-o://ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6" gracePeriod=2 Nov 26 07:26:27 crc kubenswrapper[4909]: I1126 07:26:27.711218 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khn47" event={"ID":"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7","Type":"ContainerStarted","Data":"a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89"} Nov 26 07:26:27 crc kubenswrapper[4909]: I1126 07:26:27.715625 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4wltp" event={"ID":"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0","Type":"ContainerStarted","Data":"8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e"} Nov 26 07:26:27 crc kubenswrapper[4909]: I1126 07:26:27.733168 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-khn47" podStartSLOduration=2.317108744 podStartE2EDuration="3.733137421s" podCreationTimestamp="2025-11-26 07:26:24 +0000 UTC" firstStartedPulling="2025-11-26 07:26:25.682707454 +0000 UTC m=+1557.828918620" lastFinishedPulling="2025-11-26 07:26:27.098736121 +0000 UTC m=+1559.244947297" observedRunningTime="2025-11-26 07:26:27.732831712 +0000 UTC m=+1559.879042888" watchObservedRunningTime="2025-11-26 07:26:27.733137421 +0000 UTC m=+1559.879348627" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.166426 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.291331 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-catalog-content\") pod \"7827358f-2d3b-47de-9f4e-80e0fbd67758\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.291385 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-utilities\") pod \"7827358f-2d3b-47de-9f4e-80e0fbd67758\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.291464 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42ffd\" (UniqueName: \"kubernetes.io/projected/7827358f-2d3b-47de-9f4e-80e0fbd67758-kube-api-access-42ffd\") pod \"7827358f-2d3b-47de-9f4e-80e0fbd67758\" (UID: \"7827358f-2d3b-47de-9f4e-80e0fbd67758\") " Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.292125 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-utilities" (OuterVolumeSpecName: "utilities") pod "7827358f-2d3b-47de-9f4e-80e0fbd67758" (UID: "7827358f-2d3b-47de-9f4e-80e0fbd67758"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.302959 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7827358f-2d3b-47de-9f4e-80e0fbd67758-kube-api-access-42ffd" (OuterVolumeSpecName: "kube-api-access-42ffd") pod "7827358f-2d3b-47de-9f4e-80e0fbd67758" (UID: "7827358f-2d3b-47de-9f4e-80e0fbd67758"). InnerVolumeSpecName "kube-api-access-42ffd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.340357 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7827358f-2d3b-47de-9f4e-80e0fbd67758" (UID: "7827358f-2d3b-47de-9f4e-80e0fbd67758"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.392936 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42ffd\" (UniqueName: \"kubernetes.io/projected/7827358f-2d3b-47de-9f4e-80e0fbd67758-kube-api-access-42ffd\") on node \"crc\" DevicePath \"\"" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.392968 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.392978 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7827358f-2d3b-47de-9f4e-80e0fbd67758-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.724003 4909 generic.go:334] "Generic (PLEG): container finished" podID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" containerID="8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e" exitCode=0 Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.724097 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4wltp" event={"ID":"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0","Type":"ContainerDied","Data":"8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e"} Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.726040 4909 generic.go:334] "Generic (PLEG): container finished" podID="7827358f-2d3b-47de-9f4e-80e0fbd67758" containerID="ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6" exitCode=0 Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.726136 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nc94b" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.726220 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nc94b" event={"ID":"7827358f-2d3b-47de-9f4e-80e0fbd67758","Type":"ContainerDied","Data":"ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6"} Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.726265 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nc94b" event={"ID":"7827358f-2d3b-47de-9f4e-80e0fbd67758","Type":"ContainerDied","Data":"0787fdaa2005b65517c4a2d92d0e354bb05ca9dc0b7ceabbb5dd876f8ea9eb06"} Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.726293 4909 scope.go:117] "RemoveContainer" containerID="ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.758860 4909 scope.go:117] "RemoveContainer" containerID="7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.774520 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nc94b"] Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.782894 4909 scope.go:117] "RemoveContainer" containerID="20c77d9c08453474eefb84483edc058840754e1d9b32217496a626a0e3b4e056" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.783173 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nc94b"] Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.819067 4909 scope.go:117] "RemoveContainer" containerID="ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6" Nov 26 07:26:28 crc kubenswrapper[4909]: E1126 07:26:28.819764 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6\": container with ID starting with ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6 not found: ID does not exist" containerID="ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.819797 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6"} err="failed to get container status \"ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6\": rpc error: code = NotFound desc = could not find container \"ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6\": container with ID starting with ba317c5006cb5ccdf15f51906108562dcb70818febcbcee5b8aca93fffe224d6 not found: ID does not exist" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.819822 4909 scope.go:117] "RemoveContainer" containerID="7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b" Nov 26 07:26:28 crc kubenswrapper[4909]: E1126 07:26:28.821262 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b\": container with ID starting with 7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b not found: ID does not exist" containerID="7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.821284 4909 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b"} err="failed to get container status \"7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b\": rpc error: code = NotFound desc = could not find container \"7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b\": container with ID starting with 7215ea885c2b8baedb2fe57dcfb66b1124839ebecddc41babcced59b6f1e691b not found: ID does not exist" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.821296 4909 scope.go:117] "RemoveContainer" containerID="20c77d9c08453474eefb84483edc058840754e1d9b32217496a626a0e3b4e056" Nov 26 07:26:28 crc kubenswrapper[4909]: E1126 07:26:28.821533 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c77d9c08453474eefb84483edc058840754e1d9b32217496a626a0e3b4e056\": container with ID starting with 20c77d9c08453474eefb84483edc058840754e1d9b32217496a626a0e3b4e056 not found: ID does not exist" containerID="20c77d9c08453474eefb84483edc058840754e1d9b32217496a626a0e3b4e056" Nov 26 07:26:28 crc kubenswrapper[4909]: I1126 07:26:28.821574 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20c77d9c08453474eefb84483edc058840754e1d9b32217496a626a0e3b4e056"} err="failed to get container status \"20c77d9c08453474eefb84483edc058840754e1d9b32217496a626a0e3b4e056\": rpc error: code = NotFound desc = could not find container \"20c77d9c08453474eefb84483edc058840754e1d9b32217496a626a0e3b4e056\": container with ID starting with 20c77d9c08453474eefb84483edc058840754e1d9b32217496a626a0e3b4e056 not found: ID does not exist" Nov 26 07:26:29 crc kubenswrapper[4909]: I1126 07:26:29.736822 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4wltp" event={"ID":"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0","Type":"ContainerStarted","Data":"2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad"} Nov 26 07:26:29 crc kubenswrapper[4909]: I1126 07:26:29.761202 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4wltp" podStartSLOduration=2.26241171 podStartE2EDuration="4.761180665s" podCreationTimestamp="2025-11-26 07:26:25 +0000 UTC" firstStartedPulling="2025-11-26 07:26:26.697999705 +0000 UTC m=+1558.844210871" lastFinishedPulling="2025-11-26 07:26:29.19676866 +0000 UTC m=+1561.342979826" observedRunningTime="2025-11-26 07:26:29.754399119 +0000 UTC m=+1561.900610305" watchObservedRunningTime="2025-11-26 07:26:29.761180665 +0000 UTC m=+1561.907391831" Nov 26 07:26:30 crc kubenswrapper[4909]: I1126 07:26:30.510579 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" path="/var/lib/kubelet/pods/7827358f-2d3b-47de-9f4e-80e0fbd67758/volumes" Nov 26 07:26:30 crc kubenswrapper[4909]: I1126 07:26:30.625313 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:30 crc kubenswrapper[4909]: I1126 07:26:30.625578 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:30 crc kubenswrapper[4909]: I1126 07:26:30.685185 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:30 
crc kubenswrapper[4909]: I1126 07:26:30.798118 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q7kdz" Nov 26 07:26:33 crc kubenswrapper[4909]: I1126 07:26:33.218151 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:33 crc kubenswrapper[4909]: I1126 07:26:33.218214 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:33 crc kubenswrapper[4909]: I1126 07:26:33.274315 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:33 crc kubenswrapper[4909]: I1126 07:26:33.836158 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rj8n2" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.108000 4909 scope.go:117] "RemoveContainer" containerID="b839c51d24d478c6f7088fd698b41d9b8abf9631f25af2b20b31614e8c759b5d" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.217062 4909 scope.go:117] "RemoveContainer" containerID="cd1005b99a18b17cad8c1d2e3152591e37e1fb5b2ad03cedfc3c842868488ea8" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.249387 4909 scope.go:117] "RemoveContainer" containerID="0e0ab69dd6fadaaf1295f81a6d9010207c24862ab74ebb9d59037e0435c21992" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.310077 4909 scope.go:117] "RemoveContainer" containerID="b589d87e51374dd79f69c4819dca7d38374f8adb25ebf560946dbab0a7dc7461" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.331516 4909 scope.go:117] "RemoveContainer" containerID="25fec5357f4884a3d354306d4daf24db700a915aa9ded27e5750b66f49b50a46" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.349586 4909 scope.go:117] "RemoveContainer" containerID="13767ceb28e0bc3faa7301a3c2f022aac45d597e934be4c4881db240dd43bde0" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.381732 4909 scope.go:117] "RemoveContainer" containerID="da1179331683703744dab835e1520b28f87db03e32e364ebd254b655f44ab702" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.415057 4909 scope.go:117] "RemoveContainer" containerID="2b21f4c34eb3dd787a432ad7f9eb9ad7e33cb5d88aafed3af5d2418fd9fff5b2" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.430672 4909 scope.go:117] "RemoveContainer" containerID="30e361990988bd171e5232954c25f4cf39111c82e9b5e67eeac5bf9805a9c92f" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.453610 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.453669 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.462612 4909 scope.go:117] "RemoveContainer" containerID="23a2d29c37454e96f439b04a2b47d4669d0de0721240be366937127da50d0d42" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.483321 4909 scope.go:117] "RemoveContainer" containerID="cd20b82b6cb255cb8ec632073b8e9990c8b02851b60c6247e0a5fe9e8944b3ee" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.499812 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:26:34 crc kubenswrapper[4909]: E1126 07:26:34.500024 4909 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.501412 4909 scope.go:117] "RemoveContainer" containerID="fb7ba41d1b508a19c8614b06d5b351863aa4f60b8d3c20fada30c698bfaa47b8" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.509129 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.518030 4909 scope.go:117] "RemoveContainer" containerID="83f2fa0df126cd84a93da94a384252310d087fec1c7f6c1abf2c21ba3382de98" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.538319 4909 scope.go:117] "RemoveContainer" containerID="77a55daed39f1df8e0111c410dc163e4c956a76e2dbeb26fb2c46f8a0c83a4c9" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.556736 4909 scope.go:117] "RemoveContainer" containerID="327c87ed3286e637f70f01904430f88a5b31dee163ca3d8ba6a36eb76ab58adb" Nov 26 07:26:34 crc kubenswrapper[4909]: I1126 07:26:34.850391 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:35 crc kubenswrapper[4909]: I1126 07:26:35.643750 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:35 crc kubenswrapper[4909]: I1126 07:26:35.644179 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:35 crc kubenswrapper[4909]: I1126 07:26:35.724778 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:35 crc kubenswrapper[4909]: I1126 07:26:35.881107 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4wltp" Nov 26 07:26:36 crc kubenswrapper[4909]: I1126 07:26:36.687781 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4wltp"] Nov 26 07:26:37 crc kubenswrapper[4909]: I1126 07:26:37.285623 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7kdz"] Nov 26 07:26:37 crc kubenswrapper[4909]: I1126 07:26:37.285854 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q7kdz" podUID="620cd68b-58b5-46bf-9389-0e238b55ef9e" containerName="registry-server" containerID="cri-o://fd7954afebd7febe6c6d19a2108e891f060e0f5109f6d8f500d634eb0eb52c1c" gracePeriod=2 Nov 26 07:26:37 crc kubenswrapper[4909]: I1126 07:26:37.838728 4909 generic.go:334] "Generic (PLEG): container finished" podID="620cd68b-58b5-46bf-9389-0e238b55ef9e" containerID="fd7954afebd7febe6c6d19a2108e891f060e0f5109f6d8f500d634eb0eb52c1c" exitCode=0 Nov 26 07:26:37 crc kubenswrapper[4909]: I1126 07:26:37.838796 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7kdz" 
Nov 26 07:26:37 crc kubenswrapper[4909]: I1126 07:26:37.838796 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7kdz" event={"ID":"620cd68b-58b5-46bf-9389-0e238b55ef9e","Type":"ContainerDied","Data":"fd7954afebd7febe6c6d19a2108e891f060e0f5109f6d8f500d634eb0eb52c1c"}
Nov 26 07:26:37 crc kubenswrapper[4909]: I1126 07:26:37.839371 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4wltp" podUID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" containerName="registry-server" containerID="cri-o://2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad" gracePeriod=2
Nov 26 07:26:37 crc kubenswrapper[4909]: I1126 07:26:37.889289 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rj8n2"]
Nov 26 07:26:37 crc kubenswrapper[4909]: I1126 07:26:37.889585 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rj8n2" podUID="04fdef56-93e1-4254-89bb-9e27aad42099" containerName="registry-server" containerID="cri-o://3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29" gracePeriod=2
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.240202 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7kdz"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.266869 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-catalog-content\") pod \"620cd68b-58b5-46bf-9389-0e238b55ef9e\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") "
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.266973 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-utilities\") pod \"620cd68b-58b5-46bf-9389-0e238b55ef9e\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") "
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.267160 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2zd5\" (UniqueName: \"kubernetes.io/projected/620cd68b-58b5-46bf-9389-0e238b55ef9e-kube-api-access-w2zd5\") pod \"620cd68b-58b5-46bf-9389-0e238b55ef9e\" (UID: \"620cd68b-58b5-46bf-9389-0e238b55ef9e\") "
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.268585 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-utilities" (OuterVolumeSpecName: "utilities") pod "620cd68b-58b5-46bf-9389-0e238b55ef9e" (UID: "620cd68b-58b5-46bf-9389-0e238b55ef9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.279039 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620cd68b-58b5-46bf-9389-0e238b55ef9e-kube-api-access-w2zd5" (OuterVolumeSpecName: "kube-api-access-w2zd5") pod "620cd68b-58b5-46bf-9389-0e238b55ef9e" (UID: "620cd68b-58b5-46bf-9389-0e238b55ef9e"). InnerVolumeSpecName "kube-api-access-w2zd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.322551 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4wltp"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.336992 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "620cd68b-58b5-46bf-9389-0e238b55ef9e" (UID: "620cd68b-58b5-46bf-9389-0e238b55ef9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.369792 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-catalog-content\") pod \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") "
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.369887 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-utilities\") pod \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") "
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.369981 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gd4q\" (UniqueName: \"kubernetes.io/projected/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-kube-api-access-9gd4q\") pod \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\" (UID: \"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0\") "
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.370267 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2zd5\" (UniqueName: \"kubernetes.io/projected/620cd68b-58b5-46bf-9389-0e238b55ef9e-kube-api-access-w2zd5\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.370278 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.370287 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620cd68b-58b5-46bf-9389-0e238b55ef9e-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.372810 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-utilities" (OuterVolumeSpecName: "utilities") pod "c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" (UID: "c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.374985 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-kube-api-access-9gd4q" (OuterVolumeSpecName: "kube-api-access-9gd4q") pod "c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" (UID: "c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0"). InnerVolumeSpecName "kube-api-access-9gd4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.470236 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" (UID: "c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.471193 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gd4q\" (UniqueName: \"kubernetes.io/projected/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-kube-api-access-9gd4q\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.471224 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.471238 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.485562 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wbqfg"]
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.485813 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wbqfg" podUID="1a720e79-f385-40a0-a73c-5298c3b2596b" containerName="registry-server" containerID="cri-o://bb50038703c615cd88b8c4b6ebdb732f82baf5ad3a6c25460b78e0cf6ea40846" gracePeriod=2
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.562880 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj8n2"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.673710 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4499\" (UniqueName: \"kubernetes.io/projected/04fdef56-93e1-4254-89bb-9e27aad42099-kube-api-access-r4499\") pod \"04fdef56-93e1-4254-89bb-9e27aad42099\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") "
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.673761 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-catalog-content\") pod \"04fdef56-93e1-4254-89bb-9e27aad42099\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") "
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.673858 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-utilities\") pod \"04fdef56-93e1-4254-89bb-9e27aad42099\" (UID: \"04fdef56-93e1-4254-89bb-9e27aad42099\") "
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.675043 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-utilities" (OuterVolumeSpecName: "utilities") pod "04fdef56-93e1-4254-89bb-9e27aad42099" (UID: "04fdef56-93e1-4254-89bb-9e27aad42099"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.691608 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04fdef56-93e1-4254-89bb-9e27aad42099-kube-api-access-r4499" (OuterVolumeSpecName: "kube-api-access-r4499") pod "04fdef56-93e1-4254-89bb-9e27aad42099" (UID: "04fdef56-93e1-4254-89bb-9e27aad42099"). InnerVolumeSpecName "kube-api-access-r4499". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.733666 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04fdef56-93e1-4254-89bb-9e27aad42099" (UID: "04fdef56-93e1-4254-89bb-9e27aad42099"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.775543 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4499\" (UniqueName: \"kubernetes.io/projected/04fdef56-93e1-4254-89bb-9e27aad42099-kube-api-access-r4499\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.775567 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.775577 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04fdef56-93e1-4254-89bb-9e27aad42099-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.853246 4909 generic.go:334] "Generic (PLEG): container finished" podID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" containerID="2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad" exitCode=0
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.853317 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4wltp"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.853336 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4wltp" event={"ID":"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0","Type":"ContainerDied","Data":"2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad"}
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.853698 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4wltp" event={"ID":"c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0","Type":"ContainerDied","Data":"c295da3747e61969f00be5d23edb8aa304f79c71d204ae67d21eb4279decde4f"}
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.853716 4909 scope.go:117] "RemoveContainer" containerID="2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.858337 4909 generic.go:334] "Generic (PLEG): container finished" podID="04fdef56-93e1-4254-89bb-9e27aad42099" containerID="3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29" exitCode=0
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.858465 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n2" event={"ID":"04fdef56-93e1-4254-89bb-9e27aad42099","Type":"ContainerDied","Data":"3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29"}
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.858489 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj8n2"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.858491 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n2" event={"ID":"04fdef56-93e1-4254-89bb-9e27aad42099","Type":"ContainerDied","Data":"33dfc4b4535a55b28ed707d5dfbb402ac96ce135cfc9f12e99b86f2f01eb453b"}
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.862648 4909 generic.go:334] "Generic (PLEG): container finished" podID="1a720e79-f385-40a0-a73c-5298c3b2596b" containerID="bb50038703c615cd88b8c4b6ebdb732f82baf5ad3a6c25460b78e0cf6ea40846" exitCode=0
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.862669 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbqfg" event={"ID":"1a720e79-f385-40a0-a73c-5298c3b2596b","Type":"ContainerDied","Data":"bb50038703c615cd88b8c4b6ebdb732f82baf5ad3a6c25460b78e0cf6ea40846"}
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.865375 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7kdz" event={"ID":"620cd68b-58b5-46bf-9389-0e238b55ef9e","Type":"ContainerDied","Data":"99e705f0fd620640fb38ce12318248d014842277db9da08edab53edd27b375b6"}
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.865442 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7kdz"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.879447 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4wltp"]
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.887451 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4wltp"]
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.900722 4909 scope.go:117] "RemoveContainer" containerID="8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.904424 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7kdz"]
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.920938 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q7kdz"]
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.928041 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rj8n2"]
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.928296 4909 scope.go:117] "RemoveContainer" containerID="86b20c744aac93c282eac914e5198996a92f5640af72bda2bfa7fa50f6c1b5db"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.933082 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rj8n2"]
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.943471 4909 scope.go:117] "RemoveContainer" containerID="2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad"
Nov 26 07:26:38 crc kubenswrapper[4909]: E1126 07:26:38.943947 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad\": container with ID starting with 2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad not found: ID does not exist" containerID="2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.944028 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad"} err="failed to get container status \"2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad\": rpc error: code = NotFound desc = could not find container \"2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad\": container with ID starting with 2b8fd27b0a4ce4868dc646f4e77c9b667a6b3b1322b697bca5762f5a05ec9cad not found: ID does not exist"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.944109 4909 scope.go:117] "RemoveContainer" containerID="8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e"
Nov 26 07:26:38 crc kubenswrapper[4909]: E1126 07:26:38.944453 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e\": container with ID starting with 8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e not found: ID does not exist" containerID="8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.944489 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e"} err="failed to get container status \"8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e\": rpc error: code = NotFound desc = could not find container \"8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e\": container with ID starting with 8f1ae686cde0d475c184f26b18bbc94177fb1b66a749255c22b4a983dfaeac9e not found: ID does not exist"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.944514 4909 scope.go:117] "RemoveContainer" containerID="86b20c744aac93c282eac914e5198996a92f5640af72bda2bfa7fa50f6c1b5db"
Nov 26 07:26:38 crc kubenswrapper[4909]: E1126 07:26:38.944846 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b20c744aac93c282eac914e5198996a92f5640af72bda2bfa7fa50f6c1b5db\": container with ID starting with 86b20c744aac93c282eac914e5198996a92f5640af72bda2bfa7fa50f6c1b5db not found: ID does not exist" containerID="86b20c744aac93c282eac914e5198996a92f5640af72bda2bfa7fa50f6c1b5db"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.944864 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b20c744aac93c282eac914e5198996a92f5640af72bda2bfa7fa50f6c1b5db"} err="failed to get container status \"86b20c744aac93c282eac914e5198996a92f5640af72bda2bfa7fa50f6c1b5db\": rpc error: code = NotFound desc = could not find container \"86b20c744aac93c282eac914e5198996a92f5640af72bda2bfa7fa50f6c1b5db\": container with ID starting with 86b20c744aac93c282eac914e5198996a92f5640af72bda2bfa7fa50f6c1b5db not found: ID does not exist"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.944912 4909 scope.go:117] "RemoveContainer" containerID="3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.972711 4909 scope.go:117] "RemoveContainer" containerID="dbfcf571f94a8d1b1d10e9c41267543262833b3c4bcc3fed9f0cc49df3a681f2"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.974904 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wbqfg"
Nov 26 07:26:38 crc kubenswrapper[4909]: I1126 07:26:38.992357 4909 scope.go:117] "RemoveContainer" containerID="9f41d2e4f9abd34e0d361634cf91b3d7ccae3b99ee9d2a8b8e9c00a8b6734e90"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.025830 4909 scope.go:117] "RemoveContainer" containerID="3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29"
Nov 26 07:26:39 crc kubenswrapper[4909]: E1126 07:26:39.026391 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29\": container with ID starting with 3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29 not found: ID does not exist" containerID="3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.026497 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29"} err="failed to get container status \"3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29\": rpc error: code = NotFound desc = could not find container \"3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29\": container with ID starting with 3ad962d2b4fc378171bf8a31ab3232855e0a2d076aa42a5fddaec4677ee8bd29 not found: ID does not exist"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.026570 4909 scope.go:117] "RemoveContainer" containerID="dbfcf571f94a8d1b1d10e9c41267543262833b3c4bcc3fed9f0cc49df3a681f2"
Nov 26 07:26:39 crc kubenswrapper[4909]: E1126 07:26:39.026847 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbfcf571f94a8d1b1d10e9c41267543262833b3c4bcc3fed9f0cc49df3a681f2\": container with ID starting with dbfcf571f94a8d1b1d10e9c41267543262833b3c4bcc3fed9f0cc49df3a681f2 not found: ID does not exist" containerID="dbfcf571f94a8d1b1d10e9c41267543262833b3c4bcc3fed9f0cc49df3a681f2"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.027000 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbfcf571f94a8d1b1d10e9c41267543262833b3c4bcc3fed9f0cc49df3a681f2"} err="failed to get container status \"dbfcf571f94a8d1b1d10e9c41267543262833b3c4bcc3fed9f0cc49df3a681f2\": rpc error: code = NotFound desc = could not find container \"dbfcf571f94a8d1b1d10e9c41267543262833b3c4bcc3fed9f0cc49df3a681f2\": container with ID starting with dbfcf571f94a8d1b1d10e9c41267543262833b3c4bcc3fed9f0cc49df3a681f2 not found: ID does not exist"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.027067 4909 scope.go:117] "RemoveContainer" containerID="9f41d2e4f9abd34e0d361634cf91b3d7ccae3b99ee9d2a8b8e9c00a8b6734e90"
Nov 26 07:26:39 crc kubenswrapper[4909]: E1126 07:26:39.027378 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f41d2e4f9abd34e0d361634cf91b3d7ccae3b99ee9d2a8b8e9c00a8b6734e90\": container with ID starting with 9f41d2e4f9abd34e0d361634cf91b3d7ccae3b99ee9d2a8b8e9c00a8b6734e90 not found: ID does not exist" containerID="9f41d2e4f9abd34e0d361634cf91b3d7ccae3b99ee9d2a8b8e9c00a8b6734e90"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.027457 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f41d2e4f9abd34e0d361634cf91b3d7ccae3b99ee9d2a8b8e9c00a8b6734e90"} err="failed to get container status \"9f41d2e4f9abd34e0d361634cf91b3d7ccae3b99ee9d2a8b8e9c00a8b6734e90\": rpc error: code = NotFound desc = could not find container \"9f41d2e4f9abd34e0d361634cf91b3d7ccae3b99ee9d2a8b8e9c00a8b6734e90\": container with ID starting with 9f41d2e4f9abd34e0d361634cf91b3d7ccae3b99ee9d2a8b8e9c00a8b6734e90 not found: ID does not exist"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.027526 4909 scope.go:117] "RemoveContainer" containerID="fd7954afebd7febe6c6d19a2108e891f060e0f5109f6d8f500d634eb0eb52c1c"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.057556 4909 scope.go:117] "RemoveContainer" containerID="ffc432400995308c705e0091716065f417991434adc602473a63c77b1cb09986"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.075364 4909 scope.go:117] "RemoveContainer" containerID="950917fadeb2594ad61447a4a219d0b5925541605ddf11b6afd54d7f8ace04b0"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.079852 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-utilities\") pod \"1a720e79-f385-40a0-a73c-5298c3b2596b\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") "
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.080003 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lff22\" (UniqueName: \"kubernetes.io/projected/1a720e79-f385-40a0-a73c-5298c3b2596b-kube-api-access-lff22\") pod \"1a720e79-f385-40a0-a73c-5298c3b2596b\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") "
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.080120 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-catalog-content\") pod \"1a720e79-f385-40a0-a73c-5298c3b2596b\" (UID: \"1a720e79-f385-40a0-a73c-5298c3b2596b\") "
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.081254 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-utilities" (OuterVolumeSpecName: "utilities") pod "1a720e79-f385-40a0-a73c-5298c3b2596b" (UID: "1a720e79-f385-40a0-a73c-5298c3b2596b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.083748 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a720e79-f385-40a0-a73c-5298c3b2596b-kube-api-access-lff22" (OuterVolumeSpecName: "kube-api-access-lff22") pod "1a720e79-f385-40a0-a73c-5298c3b2596b" (UID: "1a720e79-f385-40a0-a73c-5298c3b2596b"). InnerVolumeSpecName "kube-api-access-lff22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.088497 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4c5r4"]
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.088742 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4c5r4" podUID="b19d0268-905b-486f-835f-4b1d3d293940" containerName="registry-server" containerID="cri-o://a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a" gracePeriod=2
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.135129 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a720e79-f385-40a0-a73c-5298c3b2596b" (UID: "1a720e79-f385-40a0-a73c-5298c3b2596b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.181930 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.181964 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lff22\" (UniqueName: \"kubernetes.io/projected/1a720e79-f385-40a0-a73c-5298c3b2596b-kube-api-access-lff22\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.181975 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a720e79-f385-40a0-a73c-5298c3b2596b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.544010 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4c5r4"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.599020 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvxbt\" (UniqueName: \"kubernetes.io/projected/b19d0268-905b-486f-835f-4b1d3d293940-kube-api-access-tvxbt\") pod \"b19d0268-905b-486f-835f-4b1d3d293940\" (UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") "
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.599103 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-catalog-content\") pod \"b19d0268-905b-486f-835f-4b1d3d293940\" (UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") "
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.599130 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-utilities\") pod \"b19d0268-905b-486f-835f-4b1d3d293940\" (UID: \"b19d0268-905b-486f-835f-4b1d3d293940\") "
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.600792 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-utilities" (OuterVolumeSpecName: "utilities") pod "b19d0268-905b-486f-835f-4b1d3d293940" (UID: "b19d0268-905b-486f-835f-4b1d3d293940"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.612052 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19d0268-905b-486f-835f-4b1d3d293940-kube-api-access-tvxbt" (OuterVolumeSpecName: "kube-api-access-tvxbt") pod "b19d0268-905b-486f-835f-4b1d3d293940" (UID: "b19d0268-905b-486f-835f-4b1d3d293940"). InnerVolumeSpecName "kube-api-access-tvxbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.673887 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b19d0268-905b-486f-835f-4b1d3d293940" (UID: "b19d0268-905b-486f-835f-4b1d3d293940"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.686126 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9cttw"]
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.686379 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9cttw" podUID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" containerName="registry-server" containerID="cri-o://4ffbaf3d35fc0c0ff83d07319706c50d46035cb35c3379e73d03363beecb8099" gracePeriod=2
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.700927 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvxbt\" (UniqueName: \"kubernetes.io/projected/b19d0268-905b-486f-835f-4b1d3d293940-kube-api-access-tvxbt\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.700987 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.701000 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19d0268-905b-486f-835f-4b1d3d293940-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.875370 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbqfg" event={"ID":"1a720e79-f385-40a0-a73c-5298c3b2596b","Type":"ContainerDied","Data":"882d1e22fae694ae481ba6d7b94eed55c4b40ad8361b46bf4674127c5f85ef8f"}
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.875416 4909 scope.go:117] "RemoveContainer" containerID="bb50038703c615cd88b8c4b6ebdb732f82baf5ad3a6c25460b78e0cf6ea40846"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.875509 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wbqfg"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.882569 4909 generic.go:334] "Generic (PLEG): container finished" podID="b19d0268-905b-486f-835f-4b1d3d293940" containerID="a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a" exitCode=0
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.882656 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c5r4" event={"ID":"b19d0268-905b-486f-835f-4b1d3d293940","Type":"ContainerDied","Data":"a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a"}
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.882678 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4c5r4"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.882686 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c5r4" event={"ID":"b19d0268-905b-486f-835f-4b1d3d293940","Type":"ContainerDied","Data":"3572a7b2bce33b8ed1327e34e7d5f1f8e19d311370dd85ece14fc05620d3b431"}
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.885048 4909 generic.go:334] "Generic (PLEG): container finished" podID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" containerID="4ffbaf3d35fc0c0ff83d07319706c50d46035cb35c3379e73d03363beecb8099" exitCode=0
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.885093 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cttw" event={"ID":"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36","Type":"ContainerDied","Data":"4ffbaf3d35fc0c0ff83d07319706c50d46035cb35c3379e73d03363beecb8099"}
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.901028 4909 scope.go:117] "RemoveContainer" containerID="16c6710dc2e2ccf8fb4069d4dba5aa20ff97cbe2c23b0fd89f10c19a16d4299b"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.919311 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wbqfg"]
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.932263 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wbqfg"]
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.933758 4909 scope.go:117] "RemoveContainer" containerID="2113241a4535dd64013c547d44e7939afd7aac06133fd6ae12d9baab30eebffb"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.937617 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4c5r4"]
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.944472 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4c5r4"]
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.951044 4909 scope.go:117] "RemoveContainer" containerID="a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.967221 4909 scope.go:117] "RemoveContainer" containerID="b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e"
Nov 26 07:26:39 crc kubenswrapper[4909]: I1126 07:26:39.989796 4909 scope.go:117] "RemoveContainer" containerID="36c627f4a900cb0e071fc4e481818d32d1a2bea230d4be80a251332e3360e3c5"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.007799 4909 scope.go:117] "RemoveContainer" containerID="a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a"
Nov 26 07:26:40 crc kubenswrapper[4909]: E1126 07:26:40.008217 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a\": container with ID starting with a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a not found: ID does not exist" containerID="a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.008261 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a"} err="failed to get container status \"a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a\": rpc error: code = NotFound desc = could not find container \"a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a\": container with ID starting with a89d9ef8eb0f978a9a8f2a86b1107425163d0288c064586adfd5866d417e2f4a not found: ID does not exist"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.008286 4909 scope.go:117] "RemoveContainer" containerID="b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e"
Nov 26 07:26:40 crc kubenswrapper[4909]: E1126 07:26:40.008957 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e\": container with ID starting with b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e not found: ID does not exist" containerID="b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.008976 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e"} err="failed to get container status \"b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e\": rpc error: code = NotFound desc = could not find container \"b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e\": container with ID starting with b7cf2a61dfd2b725511bf3bae5b6c8032e3060cec94e3e7f099bf7f74017c72e not found: ID does not exist"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.008989 4909 scope.go:117] "RemoveContainer" containerID="36c627f4a900cb0e071fc4e481818d32d1a2bea230d4be80a251332e3360e3c5"
Nov 26 07:26:40 crc kubenswrapper[4909]: E1126 07:26:40.009330 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36c627f4a900cb0e071fc4e481818d32d1a2bea230d4be80a251332e3360e3c5\": container with ID starting with 36c627f4a900cb0e071fc4e481818d32d1a2bea230d4be80a251332e3360e3c5 not found: ID does not exist" containerID="36c627f4a900cb0e071fc4e481818d32d1a2bea230d4be80a251332e3360e3c5"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.009385 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c627f4a900cb0e071fc4e481818d32d1a2bea230d4be80a251332e3360e3c5"} err="failed to get container status \"36c627f4a900cb0e071fc4e481818d32d1a2bea230d4be80a251332e3360e3c5\": rpc error: code = NotFound desc = could not find container \"36c627f4a900cb0e071fc4e481818d32d1a2bea230d4be80a251332e3360e3c5\": container with ID starting with 36c627f4a900cb0e071fc4e481818d32d1a2bea230d4be80a251332e3360e3c5 not found: ID does not exist"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.089134 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9cttw"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.210025 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xlfb\" (UniqueName: \"kubernetes.io/projected/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-kube-api-access-7xlfb\") pod \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") "
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.210112 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-utilities\") pod \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") "
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.210272 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-catalog-content\") pod \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\" (UID: \"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36\") "
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.211322 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-utilities" (OuterVolumeSpecName: "utilities") pod "41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" (UID: "41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.215202 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-kube-api-access-7xlfb" (OuterVolumeSpecName: "kube-api-access-7xlfb") pod "41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" (UID: "41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36"). InnerVolumeSpecName "kube-api-access-7xlfb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.270863 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" (UID: "41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.288207 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c7trt"]
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.288626 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c7trt" podUID="05b80a4e-fb4d-453e-a4df-f583987f8533" containerName="registry-server" containerID="cri-o://29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b" gracePeriod=2
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.314457 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.314501 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xlfb\" (UniqueName: \"kubernetes.io/projected/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-kube-api-access-7xlfb\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.314518 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.507170 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04fdef56-93e1-4254-89bb-9e27aad42099" path="/var/lib/kubelet/pods/04fdef56-93e1-4254-89bb-9e27aad42099/volumes"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.507830 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a720e79-f385-40a0-a73c-5298c3b2596b" path="/var/lib/kubelet/pods/1a720e79-f385-40a0-a73c-5298c3b2596b/volumes"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.508453 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620cd68b-58b5-46bf-9389-0e238b55ef9e" path="/var/lib/kubelet/pods/620cd68b-58b5-46bf-9389-0e238b55ef9e/volumes"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.509466 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19d0268-905b-486f-835f-4b1d3d293940" path="/var/lib/kubelet/pods/b19d0268-905b-486f-835f-4b1d3d293940/volumes"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.510031 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" path="/var/lib/kubelet/pods/c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0/volumes"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.686025 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c7trt"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.720188 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-utilities\") pod \"05b80a4e-fb4d-453e-a4df-f583987f8533\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") "
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.720255 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-catalog-content\") pod \"05b80a4e-fb4d-453e-a4df-f583987f8533\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") "
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.720365 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5zh4\" (UniqueName: \"kubernetes.io/projected/05b80a4e-fb4d-453e-a4df-f583987f8533-kube-api-access-l5zh4\") pod \"05b80a4e-fb4d-453e-a4df-f583987f8533\" (UID: \"05b80a4e-fb4d-453e-a4df-f583987f8533\") "
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.722938 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-utilities" (OuterVolumeSpecName: "utilities") pod "05b80a4e-fb4d-453e-a4df-f583987f8533" (UID: "05b80a4e-fb4d-453e-a4df-f583987f8533"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.723873 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b80a4e-fb4d-453e-a4df-f583987f8533-kube-api-access-l5zh4" (OuterVolumeSpecName: "kube-api-access-l5zh4") pod "05b80a4e-fb4d-453e-a4df-f583987f8533" (UID: "05b80a4e-fb4d-453e-a4df-f583987f8533"). InnerVolumeSpecName "kube-api-access-l5zh4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.772975 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05b80a4e-fb4d-453e-a4df-f583987f8533" (UID: "05b80a4e-fb4d-453e-a4df-f583987f8533"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.822039 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5zh4\" (UniqueName: \"kubernetes.io/projected/05b80a4e-fb4d-453e-a4df-f583987f8533-kube-api-access-l5zh4\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.822243 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.822300 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b80a4e-fb4d-453e-a4df-f583987f8533-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.884235 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khn47"]
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.884477 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-khn47" podUID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" containerName="registry-server" containerID="cri-o://a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89" gracePeriod=2
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.896440 4909 generic.go:334] "Generic (PLEG): container finished" podID="05b80a4e-fb4d-453e-a4df-f583987f8533" containerID="29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b" exitCode=0
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.896513 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c7trt"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.896539 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7trt" event={"ID":"05b80a4e-fb4d-453e-a4df-f583987f8533","Type":"ContainerDied","Data":"29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b"}
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.896575 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7trt" event={"ID":"05b80a4e-fb4d-453e-a4df-f583987f8533","Type":"ContainerDied","Data":"e489e2e139c505afc184d746f1fbfaba12cae84005be53d477663a932ce74920"}
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.896619 4909 scope.go:117] "RemoveContainer" containerID="29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.899527 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cttw" event={"ID":"41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36","Type":"ContainerDied","Data":"add0beea53294ab058efc433bf31ae7b07490b393552df0babedc8489a209b3f"}
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.899714 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9cttw"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.922636 4909 scope.go:117] "RemoveContainer" containerID="584363d64e7a32c2ba71608fa2b9dfd1ee7dfcc966a13fa8db5c8da474e9b2d6"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.935388 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9cttw"]
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.941719 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9cttw"]
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.944355 4909 scope.go:117] "RemoveContainer" containerID="541b682545f88aebc15d8f74b5385d530fd10bd276c5e76066472b2e3de285ba"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.948579 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c7trt"]
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.955939 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c7trt"]
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.964689 4909 scope.go:117] "RemoveContainer" containerID="29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b"
Nov 26 07:26:40 crc kubenswrapper[4909]: E1126 07:26:40.965192 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b\": container with ID starting with 29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b not found: ID does not exist" containerID="29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.965245 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b"} err="failed to get container status \"29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b\": rpc error: code = NotFound desc = could not find container \"29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b\": container with ID starting with 29f4a821649919b59f3fa3f6926f86905de90bfee8ffb91544f5d230775db40b not found: ID does not exist"
Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.965279 4909 scope.go:117] "RemoveContainer" containerID="584363d64e7a32c2ba71608fa2b9dfd1ee7dfcc966a13fa8db5c8da474e9b2d6"
Nov 26 07:26:40 crc kubenswrapper[4909]: E1126 07:26:40.965640 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584363d64e7a32c2ba71608fa2b9dfd1ee7dfcc966a13fa8db5c8da474e9b2d6\": container with ID starting with 584363d64e7a32c2ba71608fa2b9dfd1ee7dfcc966a13fa8db5c8da474e9b2d6 not found: ID does not exist" containerID="584363d64e7a32c2ba71608fa2b9dfd1ee7dfcc966a13fa8db5c8da474e9b2d6"
584363d64e7a32c2ba71608fa2b9dfd1ee7dfcc966a13fa8db5c8da474e9b2d6 not found: ID does not exist" Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.965693 4909 scope.go:117] "RemoveContainer" containerID="541b682545f88aebc15d8f74b5385d530fd10bd276c5e76066472b2e3de285ba" Nov 26 07:26:40 crc kubenswrapper[4909]: E1126 07:26:40.966086 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"541b682545f88aebc15d8f74b5385d530fd10bd276c5e76066472b2e3de285ba\": container with ID starting with 541b682545f88aebc15d8f74b5385d530fd10bd276c5e76066472b2e3de285ba not found: ID does not exist" containerID="541b682545f88aebc15d8f74b5385d530fd10bd276c5e76066472b2e3de285ba" Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.966123 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"541b682545f88aebc15d8f74b5385d530fd10bd276c5e76066472b2e3de285ba"} err="failed to get container status \"541b682545f88aebc15d8f74b5385d530fd10bd276c5e76066472b2e3de285ba\": rpc error: code = NotFound desc = could not find container \"541b682545f88aebc15d8f74b5385d530fd10bd276c5e76066472b2e3de285ba\": container with ID starting with 541b682545f88aebc15d8f74b5385d530fd10bd276c5e76066472b2e3de285ba not found: ID does not exist" Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.966149 4909 scope.go:117] "RemoveContainer" containerID="4ffbaf3d35fc0c0ff83d07319706c50d46035cb35c3379e73d03363beecb8099" Nov 26 07:26:40 crc kubenswrapper[4909]: I1126 07:26:40.986091 4909 scope.go:117] "RemoveContainer" containerID="2b6adc139cf8f5a96deb43c1038347c52daaa73394385d4175ead2d328f25932" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.061606 4909 scope.go:117] "RemoveContainer" containerID="27c4b71f1b9b7214b9e115e4770ae54c79af478bb85e4b6bed6d4f7ad7f2be8c" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.341011 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.434276 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-utilities\") pod \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.435290 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-utilities" (OuterVolumeSpecName: "utilities") pod "e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" (UID: "e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.435502 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-catalog-content\") pod \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.435567 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qh5q\" (UniqueName: \"kubernetes.io/projected/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-kube-api-access-9qh5q\") pod \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\" (UID: \"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7\") " Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.436126 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.441648 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-kube-api-access-9qh5q" (OuterVolumeSpecName: "kube-api-access-9qh5q") pod "e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" (UID: "e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7"). InnerVolumeSpecName "kube-api-access-9qh5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.486373 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mmqrq"] Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.486610 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mmqrq" podUID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" containerName="registry-server" containerID="cri-o://c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a" gracePeriod=2 Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.496334 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" (UID: "e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.537274 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.537997 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qh5q\" (UniqueName: \"kubernetes.io/projected/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7-kube-api-access-9qh5q\") on node \"crc\" DevicePath \"\"" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.858852 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.910060 4909 generic.go:334] "Generic (PLEG): container finished" podID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" containerID="a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89" exitCode=0 Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.910147 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khn47" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.910209 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khn47" event={"ID":"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7","Type":"ContainerDied","Data":"a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89"} Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.910249 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khn47" event={"ID":"e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7","Type":"ContainerDied","Data":"463d49c35ab931b1d8059331b72fce9ddebd0c131bbbfee5ba18d9ba16b3f9b8"} Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.910273 4909 scope.go:117] "RemoveContainer" containerID="a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.913813 4909 generic.go:334] "Generic (PLEG): container finished" podID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" containerID="c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a" exitCode=0 Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.913881 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmqrq" event={"ID":"b4d456cd-a10d-4a92-a7b2-ab6269f7297d","Type":"ContainerDied","Data":"c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a"} Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.913916 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmqrq" event={"ID":"b4d456cd-a10d-4a92-a7b2-ab6269f7297d","Type":"ContainerDied","Data":"d00252ca95f1371613e9d2dad1f13f0c7c0bb77fb3486b16ac7abd926300561d"} Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.913891 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mmqrq" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.939376 4909 scope.go:117] "RemoveContainer" containerID="62e5a4e0720904ecbb3393ce5ceb87133a3aef8d221df398268069cdfb82300a" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.942654 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-utilities\") pod \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.943585 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-utilities" (OuterVolumeSpecName: "utilities") pod "b4d456cd-a10d-4a92-a7b2-ab6269f7297d" (UID: "b4d456cd-a10d-4a92-a7b2-ab6269f7297d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.943821 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-catalog-content\") pod \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.946089 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glxr4\" (UniqueName: \"kubernetes.io/projected/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-kube-api-access-glxr4\") pod \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\" (UID: \"b4d456cd-a10d-4a92-a7b2-ab6269f7297d\") " Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.946404 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.946539 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khn47"] Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.951415 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-kube-api-access-glxr4" (OuterVolumeSpecName: "kube-api-access-glxr4") pod "b4d456cd-a10d-4a92-a7b2-ab6269f7297d" (UID: "b4d456cd-a10d-4a92-a7b2-ab6269f7297d"). InnerVolumeSpecName "kube-api-access-glxr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.954734 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-khn47"] Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.967917 4909 scope.go:117] "RemoveContainer" containerID="68a6b7d902f8cfe4747cad12ad373928ddd98b92c56159ffe04863b05c6c4e8a" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.988570 4909 scope.go:117] "RemoveContainer" containerID="a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89" Nov 26 07:26:41 crc kubenswrapper[4909]: E1126 07:26:41.989270 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89\": container with ID starting with a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89 not found: ID does not exist" containerID="a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.989323 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89"} err="failed to get container status \"a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89\": rpc error: code = NotFound desc = could not find container \"a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89\": container with ID starting with a56bf9220ef179f448482738ca2bb34377c046786cf40060800be27992429b89 not found: ID does not exist" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.989354 4909 scope.go:117] "RemoveContainer" containerID="62e5a4e0720904ecbb3393ce5ceb87133a3aef8d221df398268069cdfb82300a" Nov 26 07:26:41 crc kubenswrapper[4909]: E1126 07:26:41.989891 4909 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"62e5a4e0720904ecbb3393ce5ceb87133a3aef8d221df398268069cdfb82300a\": container with ID starting with 62e5a4e0720904ecbb3393ce5ceb87133a3aef8d221df398268069cdfb82300a not found: ID does not exist" containerID="62e5a4e0720904ecbb3393ce5ceb87133a3aef8d221df398268069cdfb82300a" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.989921 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e5a4e0720904ecbb3393ce5ceb87133a3aef8d221df398268069cdfb82300a"} err="failed to get container status \"62e5a4e0720904ecbb3393ce5ceb87133a3aef8d221df398268069cdfb82300a\": rpc error: code = NotFound desc = could not find container \"62e5a4e0720904ecbb3393ce5ceb87133a3aef8d221df398268069cdfb82300a\": container with ID starting with 62e5a4e0720904ecbb3393ce5ceb87133a3aef8d221df398268069cdfb82300a not found: ID does not exist" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.989943 4909 scope.go:117] "RemoveContainer" containerID="68a6b7d902f8cfe4747cad12ad373928ddd98b92c56159ffe04863b05c6c4e8a" Nov 26 07:26:41 crc kubenswrapper[4909]: E1126 07:26:41.990383 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68a6b7d902f8cfe4747cad12ad373928ddd98b92c56159ffe04863b05c6c4e8a\": container with ID starting with 68a6b7d902f8cfe4747cad12ad373928ddd98b92c56159ffe04863b05c6c4e8a not found: ID does not exist" containerID="68a6b7d902f8cfe4747cad12ad373928ddd98b92c56159ffe04863b05c6c4e8a" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.990423 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68a6b7d902f8cfe4747cad12ad373928ddd98b92c56159ffe04863b05c6c4e8a"} err="failed to get container status \"68a6b7d902f8cfe4747cad12ad373928ddd98b92c56159ffe04863b05c6c4e8a\": rpc error: code = NotFound desc = could not find container \"68a6b7d902f8cfe4747cad12ad373928ddd98b92c56159ffe04863b05c6c4e8a\": container with ID starting with 68a6b7d902f8cfe4747cad12ad373928ddd98b92c56159ffe04863b05c6c4e8a not found: ID does not exist" Nov 26 07:26:41 crc kubenswrapper[4909]: I1126 07:26:41.990448 4909 scope.go:117] "RemoveContainer" containerID="c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.001252 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4d456cd-a10d-4a92-a7b2-ab6269f7297d" (UID: "b4d456cd-a10d-4a92-a7b2-ab6269f7297d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.008714 4909 scope.go:117] "RemoveContainer" containerID="550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.031720 4909 scope.go:117] "RemoveContainer" containerID="19da9c97a921700432c8413a2efb40c1325dea44be507e4428eb6e8383a1a221" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.048526 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.048806 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glxr4\" (UniqueName: \"kubernetes.io/projected/b4d456cd-a10d-4a92-a7b2-ab6269f7297d-kube-api-access-glxr4\") on node \"crc\" DevicePath \"\"" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.049989 4909 scope.go:117] "RemoveContainer" containerID="c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a" Nov 26 07:26:42 crc kubenswrapper[4909]: E1126 07:26:42.050528 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a\": container with ID starting with c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a not found: ID does not exist" containerID="c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.050579 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a"} err="failed to get container status \"c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a\": rpc error: code = NotFound desc = could not find container \"c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a\": container with ID starting with c647e3861f9dcd02f2291180cfe592498408b83335fc2f894fcc1e0b7eaaa73a not found: ID does not exist" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.050633 4909 scope.go:117] "RemoveContainer" containerID="550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944" Nov 26 07:26:42 crc kubenswrapper[4909]: E1126 07:26:42.051006 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944\": container with ID starting with 550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944 not found: ID does not exist" containerID="550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.051034 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944"} err="failed to get container status \"550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944\": rpc error: code = NotFound desc = could not find container \"550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944\": container with ID starting with 550947f2418b3844836af8b895558a3af2ae649acc5f409e7f886ea704577944 not found: ID does not exist" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.051050 4909 scope.go:117] "RemoveContainer" 
containerID="19da9c97a921700432c8413a2efb40c1325dea44be507e4428eb6e8383a1a221" Nov 26 07:26:42 crc kubenswrapper[4909]: E1126 07:26:42.051381 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19da9c97a921700432c8413a2efb40c1325dea44be507e4428eb6e8383a1a221\": container with ID starting with 19da9c97a921700432c8413a2efb40c1325dea44be507e4428eb6e8383a1a221 not found: ID does not exist" containerID="19da9c97a921700432c8413a2efb40c1325dea44be507e4428eb6e8383a1a221" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.051544 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19da9c97a921700432c8413a2efb40c1325dea44be507e4428eb6e8383a1a221"} err="failed to get container status \"19da9c97a921700432c8413a2efb40c1325dea44be507e4428eb6e8383a1a221\": rpc error: code = NotFound desc = could not find container \"19da9c97a921700432c8413a2efb40c1325dea44be507e4428eb6e8383a1a221\": container with ID starting with 19da9c97a921700432c8413a2efb40c1325dea44be507e4428eb6e8383a1a221 not found: ID does not exist" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.249584 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mmqrq"] Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.255041 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mmqrq"] Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.507385 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b80a4e-fb4d-453e-a4df-f583987f8533" path="/var/lib/kubelet/pods/05b80a4e-fb4d-453e-a4df-f583987f8533/volumes" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.508470 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" path="/var/lib/kubelet/pods/41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36/volumes" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.509181 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" path="/var/lib/kubelet/pods/b4d456cd-a10d-4a92-a7b2-ab6269f7297d/volumes" Nov 26 07:26:42 crc kubenswrapper[4909]: I1126 07:26:42.510304 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" path="/var/lib/kubelet/pods/e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7/volumes" Nov 26 07:26:45 crc kubenswrapper[4909]: I1126 07:26:45.498560 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:26:45 crc kubenswrapper[4909]: E1126 07:26:45.498938 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:26:58 crc kubenswrapper[4909]: I1126 07:26:58.503720 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:26:58 crc kubenswrapper[4909]: E1126 07:26:58.504848 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:27:10 crc kubenswrapper[4909]: I1126 07:27:10.499528 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:27:10 crc kubenswrapper[4909]: E1126 07:27:10.500863 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:27:22 crc kubenswrapper[4909]: I1126 07:27:22.498875 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:27:22 crc kubenswrapper[4909]: E1126 07:27:22.500105 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:27:33 crc kubenswrapper[4909]: I1126 07:27:33.499729 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:27:33 crc kubenswrapper[4909]: E1126 07:27:33.500800 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:27:34 crc kubenswrapper[4909]: I1126 07:27:34.758284 4909 scope.go:117] "RemoveContainer" containerID="4add6d1b0117e9447b11bea0e6b55ba54c459b24effc28c5fb858469b66444e8" Nov 26 07:27:34 crc kubenswrapper[4909]: I1126 07:27:34.829894 4909 scope.go:117] "RemoveContainer" containerID="12ac34164d81708ee6454066bd4299d249cc968383b7ec0cb8044233d2effe68" Nov 26 07:27:34 crc kubenswrapper[4909]: I1126 07:27:34.852997 4909 scope.go:117] "RemoveContainer" containerID="2a498b57189a43f970e29d0e8040bbe8756423b8463ab0d366df0dad3c6b6fa0" Nov 26 07:27:34 crc kubenswrapper[4909]: I1126 07:27:34.872823 4909 scope.go:117] "RemoveContainer" containerID="51f26cf80d8da853fb9da8dc0fafd164d9ce6124c41fe4293c3704ab70a1c633" Nov 26 07:27:34 crc kubenswrapper[4909]: I1126 07:27:34.892576 4909 scope.go:117] "RemoveContainer" containerID="c340f71bc1c6c25a928c8f228589f2df2d196a7a03b5b16c0c1846e902a918ac" Nov 26 07:27:34 crc kubenswrapper[4909]: I1126 07:27:34.944100 4909 scope.go:117] "RemoveContainer" containerID="fc4e1d11210ea94d5e20c39c83a322bf0f7dc51504c8b4db99b77d2610531017" Nov 26 07:27:34 crc kubenswrapper[4909]: I1126 07:27:34.964686 4909 scope.go:117] "RemoveContainer" containerID="116eab90b476caef43b56d5a61af14c4c7625f3f13c935ccae4c8e2d51b7d92e" Nov 26 
07:27:35 crc kubenswrapper[4909]: I1126 07:27:35.010946 4909 scope.go:117] "RemoveContainer" containerID="d849337821d8217076eb9a9d55645f97c144b965bd6ef5def3a986ec27b0c502" Nov 26 07:27:35 crc kubenswrapper[4909]: I1126 07:27:35.038330 4909 scope.go:117] "RemoveContainer" containerID="043f2d799dc68a009facf1b7538ebe9df2d186af807ec3bf0beb95026499894c" Nov 26 07:27:48 crc kubenswrapper[4909]: I1126 07:27:48.505413 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:27:48 crc kubenswrapper[4909]: E1126 07:27:48.508517 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:27:59 crc kubenswrapper[4909]: I1126 07:27:59.498774 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:27:59 crc kubenswrapper[4909]: E1126 07:27:59.501906 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:28:11 crc kubenswrapper[4909]: I1126 07:28:11.499759 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:28:11 crc kubenswrapper[4909]: E1126 07:28:11.501140 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:28:24 crc kubenswrapper[4909]: I1126 07:28:24.499081 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:28:24 crc kubenswrapper[4909]: E1126 07:28:24.500091 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:28:35 crc kubenswrapper[4909]: I1126 07:28:35.269452 4909 scope.go:117] "RemoveContainer" containerID="ef7a513521775234881fd5ee1e7482c3b02487c963f7979e7ddc36cab4590a3e" Nov 26 07:28:35 crc kubenswrapper[4909]: I1126 07:28:35.305196 4909 scope.go:117] "RemoveContainer" containerID="04913ab9e915ad52de6004fca29426d07d456a81c47c21c9ae2f92f19e8bde70" Nov 26 07:28:35 crc kubenswrapper[4909]: I1126 07:28:35.343347 4909 scope.go:117] "RemoveContainer" 
containerID="ccfd3d7b7cf51112b6da0749f82a4e2c74a5a1cb50d253b689f9325bce61d9fd" Nov 26 07:28:35 crc kubenswrapper[4909]: I1126 07:28:35.499608 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:28:35 crc kubenswrapper[4909]: E1126 07:28:35.499865 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:28:47 crc kubenswrapper[4909]: I1126 07:28:47.499696 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:28:47 crc kubenswrapper[4909]: E1126 07:28:47.500746 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:29:01 crc kubenswrapper[4909]: I1126 07:29:01.499305 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:29:01 crc kubenswrapper[4909]: E1126 07:29:01.500471 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:29:13 crc kubenswrapper[4909]: I1126 07:29:13.499408 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:29:13 crc kubenswrapper[4909]: E1126 07:29:13.500151 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:29:25 crc kubenswrapper[4909]: I1126 07:29:25.498568 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:29:25 crc kubenswrapper[4909]: E1126 07:29:25.499356 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:29:37 crc kubenswrapper[4909]: I1126 07:29:37.499784 4909 scope.go:117] "RemoveContainer" 
containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:29:37 crc kubenswrapper[4909]: E1126 07:29:37.500950 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:29:49 crc kubenswrapper[4909]: I1126 07:29:49.498766 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:29:49 crc kubenswrapper[4909]: E1126 07:29:49.501068 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.177381 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx"] Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178479 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b80a4e-fb4d-453e-a4df-f583987f8533" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178503 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b80a4e-fb4d-453e-a4df-f583987f8533" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178518 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178530 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178548 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178561 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178587 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178631 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178650 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b80a4e-fb4d-453e-a4df-f583987f8533" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178661 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b80a4e-fb4d-453e-a4df-f583987f8533" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178684 4909 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178695 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178716 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04fdef56-93e1-4254-89bb-9e27aad42099" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178725 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="04fdef56-93e1-4254-89bb-9e27aad42099" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178739 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19d0268-905b-486f-835f-4b1d3d293940" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178776 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19d0268-905b-486f-835f-4b1d3d293940" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178796 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620cd68b-58b5-46bf-9389-0e238b55ef9e" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178806 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="620cd68b-58b5-46bf-9389-0e238b55ef9e" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178827 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b80a4e-fb4d-453e-a4df-f583987f8533" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178837 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b80a4e-fb4d-453e-a4df-f583987f8533" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178854 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178863 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178879 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04fdef56-93e1-4254-89bb-9e27aad42099" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178888 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="04fdef56-93e1-4254-89bb-9e27aad42099" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178909 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178919 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178939 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178948 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178964 4909 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a720e79-f385-40a0-a73c-5298c3b2596b" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178975 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a720e79-f385-40a0-a73c-5298c3b2596b" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.178986 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620cd68b-58b5-46bf-9389-0e238b55ef9e" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.178996 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="620cd68b-58b5-46bf-9389-0e238b55ef9e" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179011 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179020 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179041 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179051 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179065 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179075 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179095 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179104 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179118 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19d0268-905b-486f-835f-4b1d3d293940" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179129 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19d0268-905b-486f-835f-4b1d3d293940" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179145 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04fdef56-93e1-4254-89bb-9e27aad42099" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179154 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="04fdef56-93e1-4254-89bb-9e27aad42099" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179167 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620cd68b-58b5-46bf-9389-0e238b55ef9e" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179177 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="620cd68b-58b5-46bf-9389-0e238b55ef9e" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: 
E1126 07:30:00.179191 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a720e79-f385-40a0-a73c-5298c3b2596b" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179201 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a720e79-f385-40a0-a73c-5298c3b2596b" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179217 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a720e79-f385-40a0-a73c-5298c3b2596b" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179226 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a720e79-f385-40a0-a73c-5298c3b2596b" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179241 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179250 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" containerName="extract-content" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179267 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179278 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179292 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179301 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179312 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19d0268-905b-486f-835f-4b1d3d293940" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179323 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19d0268-905b-486f-835f-4b1d3d293940" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: E1126 07:30:00.179344 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179354 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" containerName="extract-utilities" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179717 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02c9c50-c2fd-4e2a-90c6-38c0a46ec7d0" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179744 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7827358f-2d3b-47de-9f4e-80e0fbd67758" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179765 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="04fdef56-93e1-4254-89bb-9e27aad42099" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179781 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="620cd68b-58b5-46bf-9389-0e238b55ef9e" 
containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179797 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a720e79-f385-40a0-a73c-5298c3b2596b" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179817 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8dc5e21-d9a1-4d94-afbe-b1e71b8b27a7" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179839 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19d0268-905b-486f-835f-4b1d3d293940" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179862 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a0b47d-3fff-4abd-b7cb-f23c6f3e0f36" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179884 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b80a4e-fb4d-453e-a4df-f583987f8533" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.179911 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d456cd-a10d-4a92-a7b2-ab6269f7297d" containerName="registry-server" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.180634 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.182499 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.189177 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.192155 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx"] Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.215711 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlgmc\" (UniqueName: \"kubernetes.io/projected/da8482ff-880f-453b-bc38-5578ee3fad7f-kube-api-access-xlgmc\") pod \"collect-profiles-29402370-xqnvx\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.215780 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da8482ff-880f-453b-bc38-5578ee3fad7f-secret-volume\") pod \"collect-profiles-29402370-xqnvx\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.215850 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da8482ff-880f-453b-bc38-5578ee3fad7f-config-volume\") pod \"collect-profiles-29402370-xqnvx\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.317519 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/da8482ff-880f-453b-bc38-5578ee3fad7f-secret-volume\") pod \"collect-profiles-29402370-xqnvx\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.317619 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da8482ff-880f-453b-bc38-5578ee3fad7f-config-volume\") pod \"collect-profiles-29402370-xqnvx\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.317672 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlgmc\" (UniqueName: \"kubernetes.io/projected/da8482ff-880f-453b-bc38-5578ee3fad7f-kube-api-access-xlgmc\") pod \"collect-profiles-29402370-xqnvx\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.319073 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da8482ff-880f-453b-bc38-5578ee3fad7f-config-volume\") pod \"collect-profiles-29402370-xqnvx\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.325271 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da8482ff-880f-453b-bc38-5578ee3fad7f-secret-volume\") pod \"collect-profiles-29402370-xqnvx\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.335707 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlgmc\" (UniqueName: \"kubernetes.io/projected/da8482ff-880f-453b-bc38-5578ee3fad7f-kube-api-access-xlgmc\") pod \"collect-profiles-29402370-xqnvx\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.503801 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:00 crc kubenswrapper[4909]: I1126 07:30:00.963512 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx"] Nov 26 07:30:01 crc kubenswrapper[4909]: I1126 07:30:01.794784 4909 generic.go:334] "Generic (PLEG): container finished" podID="da8482ff-880f-453b-bc38-5578ee3fad7f" containerID="3e1a9729b8c31d1cb51530eb594a1e4c36685930196775afd19f0e82cfd9341e" exitCode=0 Nov 26 07:30:01 crc kubenswrapper[4909]: I1126 07:30:01.794884 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" event={"ID":"da8482ff-880f-453b-bc38-5578ee3fad7f","Type":"ContainerDied","Data":"3e1a9729b8c31d1cb51530eb594a1e4c36685930196775afd19f0e82cfd9341e"} Nov 26 07:30:01 crc kubenswrapper[4909]: I1126 07:30:01.795067 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" event={"ID":"da8482ff-880f-453b-bc38-5578ee3fad7f","Type":"ContainerStarted","Data":"95e1ae6a1c2ccd130e5d498482b204120fae36d4d6f46f89b4e57b533375fe8e"} Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.080134 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.166564 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlgmc\" (UniqueName: \"kubernetes.io/projected/da8482ff-880f-453b-bc38-5578ee3fad7f-kube-api-access-xlgmc\") pod \"da8482ff-880f-453b-bc38-5578ee3fad7f\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.166750 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da8482ff-880f-453b-bc38-5578ee3fad7f-secret-volume\") pod \"da8482ff-880f-453b-bc38-5578ee3fad7f\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.167062 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da8482ff-880f-453b-bc38-5578ee3fad7f-config-volume\") pod \"da8482ff-880f-453b-bc38-5578ee3fad7f\" (UID: \"da8482ff-880f-453b-bc38-5578ee3fad7f\") " Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.167783 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8482ff-880f-453b-bc38-5578ee3fad7f-config-volume" (OuterVolumeSpecName: "config-volume") pod "da8482ff-880f-453b-bc38-5578ee3fad7f" (UID: "da8482ff-880f-453b-bc38-5578ee3fad7f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.172031 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8482ff-880f-453b-bc38-5578ee3fad7f-kube-api-access-xlgmc" (OuterVolumeSpecName: "kube-api-access-xlgmc") pod "da8482ff-880f-453b-bc38-5578ee3fad7f" (UID: "da8482ff-880f-453b-bc38-5578ee3fad7f"). InnerVolumeSpecName "kube-api-access-xlgmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.173915 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8482ff-880f-453b-bc38-5578ee3fad7f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da8482ff-880f-453b-bc38-5578ee3fad7f" (UID: "da8482ff-880f-453b-bc38-5578ee3fad7f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.268687 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da8482ff-880f-453b-bc38-5578ee3fad7f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.269008 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlgmc\" (UniqueName: \"kubernetes.io/projected/da8482ff-880f-453b-bc38-5578ee3fad7f-kube-api-access-xlgmc\") on node \"crc\" DevicePath \"\"" Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.269020 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da8482ff-880f-453b-bc38-5578ee3fad7f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.499209 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:30:03 crc kubenswrapper[4909]: E1126 07:30:03.500007 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.811697 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" event={"ID":"da8482ff-880f-453b-bc38-5578ee3fad7f","Type":"ContainerDied","Data":"95e1ae6a1c2ccd130e5d498482b204120fae36d4d6f46f89b4e57b533375fe8e"} Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.811740 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95e1ae6a1c2ccd130e5d498482b204120fae36d4d6f46f89b4e57b533375fe8e" Nov 26 07:30:03 crc kubenswrapper[4909]: I1126 07:30:03.812097 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx" Nov 26 07:30:17 crc kubenswrapper[4909]: I1126 07:30:17.498943 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:30:17 crc kubenswrapper[4909]: E1126 07:30:17.499631 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:30:31 crc kubenswrapper[4909]: I1126 07:30:31.499180 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:30:31 crc kubenswrapper[4909]: E1126 07:30:31.499812 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:30:43 crc kubenswrapper[4909]: I1126 07:30:43.499861 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:30:43 crc kubenswrapper[4909]: E1126 07:30:43.501057 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:30:57 crc kubenswrapper[4909]: I1126 07:30:57.499494 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:30:57 crc kubenswrapper[4909]: E1126 07:30:57.500649 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:31:10 crc kubenswrapper[4909]: I1126 07:31:10.499416 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:31:11 crc kubenswrapper[4909]: I1126 07:31:11.395988 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"682c9fe04226d542318eba43fef0a14ef494d1e8654a34321acd7471f2fa933e"} Nov 26 07:33:37 crc kubenswrapper[4909]: I1126 07:33:37.301708 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:33:37 crc kubenswrapper[4909]: I1126 07:33:37.302188 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.266916 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mxcsv"] Nov 26 07:33:58 crc kubenswrapper[4909]: E1126 07:33:58.268638 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8482ff-880f-453b-bc38-5578ee3fad7f" containerName="collect-profiles" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.268724 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8482ff-880f-453b-bc38-5578ee3fad7f" containerName="collect-profiles" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.268941 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8482ff-880f-453b-bc38-5578ee3fad7f" containerName="collect-profiles" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.269983 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.295299 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mxcsv"] Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.424051 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-utilities\") pod \"redhat-marketplace-mxcsv\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.424139 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5qzm\" (UniqueName: \"kubernetes.io/projected/6d874618-56f7-4b5a-a882-3631cb6ab5ee-kube-api-access-z5qzm\") pod \"redhat-marketplace-mxcsv\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.424176 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-catalog-content\") pod \"redhat-marketplace-mxcsv\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.525562 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-utilities\") pod \"redhat-marketplace-mxcsv\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.525629 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5qzm\" (UniqueName: \"kubernetes.io/projected/6d874618-56f7-4b5a-a882-3631cb6ab5ee-kube-api-access-z5qzm\") pod 
\"redhat-marketplace-mxcsv\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.525650 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-catalog-content\") pod \"redhat-marketplace-mxcsv\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.526149 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-utilities\") pod \"redhat-marketplace-mxcsv\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.526202 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-catalog-content\") pod \"redhat-marketplace-mxcsv\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.545020 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5qzm\" (UniqueName: \"kubernetes.io/projected/6d874618-56f7-4b5a-a882-3631cb6ab5ee-kube-api-access-z5qzm\") pod \"redhat-marketplace-mxcsv\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:33:58 crc kubenswrapper[4909]: I1126 07:33:58.590523 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:33:59 crc kubenswrapper[4909]: I1126 07:33:59.107309 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mxcsv"] Nov 26 07:33:59 crc kubenswrapper[4909]: I1126 07:33:59.235699 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxcsv" event={"ID":"6d874618-56f7-4b5a-a882-3631cb6ab5ee","Type":"ContainerStarted","Data":"fa3a46fbaefe4e776bcaaa83a1b230c3363c3952c2861ce56d8cd4d1e6cc303c"} Nov 26 07:34:00 crc kubenswrapper[4909]: I1126 07:34:00.249395 4909 generic.go:334] "Generic (PLEG): container finished" podID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" containerID="3af4789f77963258a30356c0d041f3525c7473946aac39c68deb81eb8413c10e" exitCode=0 Nov 26 07:34:00 crc kubenswrapper[4909]: I1126 07:34:00.249475 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxcsv" event={"ID":"6d874618-56f7-4b5a-a882-3631cb6ab5ee","Type":"ContainerDied","Data":"3af4789f77963258a30356c0d041f3525c7473946aac39c68deb81eb8413c10e"} Nov 26 07:34:00 crc kubenswrapper[4909]: I1126 07:34:00.252728 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 07:34:01 crc kubenswrapper[4909]: I1126 07:34:01.257750 4909 generic.go:334] "Generic (PLEG): container finished" podID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" containerID="8e524e11f67dd2e3c49e844f99e70c42590172347aa59a656947023b9faf1ff7" exitCode=0 Nov 26 07:34:01 crc kubenswrapper[4909]: I1126 07:34:01.257800 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxcsv" event={"ID":"6d874618-56f7-4b5a-a882-3631cb6ab5ee","Type":"ContainerDied","Data":"8e524e11f67dd2e3c49e844f99e70c42590172347aa59a656947023b9faf1ff7"} Nov 26 07:34:02 crc kubenswrapper[4909]: I1126 07:34:02.267508 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxcsv" event={"ID":"6d874618-56f7-4b5a-a882-3631cb6ab5ee","Type":"ContainerStarted","Data":"22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c"} Nov 26 07:34:02 crc kubenswrapper[4909]: I1126 07:34:02.284705 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mxcsv" podStartSLOduration=2.838260215 podStartE2EDuration="4.284686479s" podCreationTimestamp="2025-11-26 07:33:58 +0000 UTC" firstStartedPulling="2025-11-26 07:34:00.252144576 +0000 UTC m=+2012.398355772" lastFinishedPulling="2025-11-26 07:34:01.69857085 +0000 UTC m=+2013.844782036" observedRunningTime="2025-11-26 07:34:02.28216019 +0000 UTC m=+2014.428371356" watchObservedRunningTime="2025-11-26 07:34:02.284686479 +0000 UTC m=+2014.430897645" Nov 26 07:34:07 crc kubenswrapper[4909]: I1126 07:34:07.301721 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:34:07 crc kubenswrapper[4909]: I1126 07:34:07.302391 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 26 07:34:08 crc kubenswrapper[4909]: I1126 07:34:08.591508 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:34:08 crc kubenswrapper[4909]: I1126 07:34:08.593656 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:34:08 crc kubenswrapper[4909]: I1126 07:34:08.644733 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:34:09 crc kubenswrapper[4909]: I1126 07:34:09.419464 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:34:09 crc kubenswrapper[4909]: I1126 07:34:09.479198 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mxcsv"] Nov 26 07:34:11 crc kubenswrapper[4909]: I1126 07:34:11.363431 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mxcsv" podUID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" containerName="registry-server" containerID="cri-o://22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c" gracePeriod=2 Nov 26 07:34:11 crc kubenswrapper[4909]: I1126 07:34:11.806306 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:34:11 crc kubenswrapper[4909]: I1126 07:34:11.930290 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-utilities\") pod \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " Nov 26 07:34:11 crc kubenswrapper[4909]: I1126 07:34:11.930377 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5qzm\" (UniqueName: \"kubernetes.io/projected/6d874618-56f7-4b5a-a882-3631cb6ab5ee-kube-api-access-z5qzm\") pod \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " Nov 26 07:34:11 crc kubenswrapper[4909]: I1126 07:34:11.930543 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-catalog-content\") pod \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\" (UID: \"6d874618-56f7-4b5a-a882-3631cb6ab5ee\") " Nov 26 07:34:11 crc kubenswrapper[4909]: I1126 07:34:11.931353 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-utilities" (OuterVolumeSpecName: "utilities") pod "6d874618-56f7-4b5a-a882-3631cb6ab5ee" (UID: "6d874618-56f7-4b5a-a882-3631cb6ab5ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:34:11 crc kubenswrapper[4909]: I1126 07:34:11.939262 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d874618-56f7-4b5a-a882-3631cb6ab5ee-kube-api-access-z5qzm" (OuterVolumeSpecName: "kube-api-access-z5qzm") pod "6d874618-56f7-4b5a-a882-3631cb6ab5ee" (UID: "6d874618-56f7-4b5a-a882-3631cb6ab5ee"). InnerVolumeSpecName "kube-api-access-z5qzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:34:11 crc kubenswrapper[4909]: I1126 07:34:11.953802 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d874618-56f7-4b5a-a882-3631cb6ab5ee" (UID: "6d874618-56f7-4b5a-a882-3631cb6ab5ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.031918 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.032155 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d874618-56f7-4b5a-a882-3631cb6ab5ee-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.032219 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5qzm\" (UniqueName: \"kubernetes.io/projected/6d874618-56f7-4b5a-a882-3631cb6ab5ee-kube-api-access-z5qzm\") on node \"crc\" DevicePath \"\"" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.379792 4909 generic.go:334] "Generic (PLEG): container finished" podID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" containerID="22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c" exitCode=0 Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.379904 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mxcsv" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.379904 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxcsv" event={"ID":"6d874618-56f7-4b5a-a882-3631cb6ab5ee","Type":"ContainerDied","Data":"22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c"} Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.380005 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxcsv" event={"ID":"6d874618-56f7-4b5a-a882-3631cb6ab5ee","Type":"ContainerDied","Data":"fa3a46fbaefe4e776bcaaa83a1b230c3363c3952c2861ce56d8cd4d1e6cc303c"} Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.380061 4909 scope.go:117] "RemoveContainer" containerID="22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.407496 4909 scope.go:117] "RemoveContainer" containerID="8e524e11f67dd2e3c49e844f99e70c42590172347aa59a656947023b9faf1ff7" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.445743 4909 scope.go:117] "RemoveContainer" containerID="3af4789f77963258a30356c0d041f3525c7473946aac39c68deb81eb8413c10e" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.447332 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mxcsv"] Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.461642 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mxcsv"] Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.479050 4909 scope.go:117] "RemoveContainer" containerID="22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c" Nov 26 07:34:12 crc kubenswrapper[4909]: E1126 07:34:12.479649 4909 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c\": container with ID starting with 22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c not found: ID does not exist" containerID="22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.479692 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c"} err="failed to get container status \"22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c\": rpc error: code = NotFound desc = could not find container \"22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c\": container with ID starting with 22560427be649a7c0861e4a2d86674fba46b977248ffb69aa62e7c8b81638e5c not found: ID does not exist" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.479719 4909 scope.go:117] "RemoveContainer" containerID="8e524e11f67dd2e3c49e844f99e70c42590172347aa59a656947023b9faf1ff7" Nov 26 07:34:12 crc kubenswrapper[4909]: E1126 07:34:12.480124 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e524e11f67dd2e3c49e844f99e70c42590172347aa59a656947023b9faf1ff7\": container with ID starting with 8e524e11f67dd2e3c49e844f99e70c42590172347aa59a656947023b9faf1ff7 not found: ID does not exist" containerID="8e524e11f67dd2e3c49e844f99e70c42590172347aa59a656947023b9faf1ff7" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.480213 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e524e11f67dd2e3c49e844f99e70c42590172347aa59a656947023b9faf1ff7"} err="failed to get container status \"8e524e11f67dd2e3c49e844f99e70c42590172347aa59a656947023b9faf1ff7\": rpc error: code = NotFound desc = could not find container \"8e524e11f67dd2e3c49e844f99e70c42590172347aa59a656947023b9faf1ff7\": container with ID starting with 8e524e11f67dd2e3c49e844f99e70c42590172347aa59a656947023b9faf1ff7 not found: ID does not exist" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.480254 4909 scope.go:117] "RemoveContainer" containerID="3af4789f77963258a30356c0d041f3525c7473946aac39c68deb81eb8413c10e" Nov 26 07:34:12 crc kubenswrapper[4909]: E1126 07:34:12.480734 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af4789f77963258a30356c0d041f3525c7473946aac39c68deb81eb8413c10e\": container with ID starting with 3af4789f77963258a30356c0d041f3525c7473946aac39c68deb81eb8413c10e not found: ID does not exist" containerID="3af4789f77963258a30356c0d041f3525c7473946aac39c68deb81eb8413c10e" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.480798 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af4789f77963258a30356c0d041f3525c7473946aac39c68deb81eb8413c10e"} err="failed to get container status \"3af4789f77963258a30356c0d041f3525c7473946aac39c68deb81eb8413c10e\": rpc error: code = NotFound desc = could not find container \"3af4789f77963258a30356c0d041f3525c7473946aac39c68deb81eb8413c10e\": container with ID starting with 3af4789f77963258a30356c0d041f3525c7473946aac39c68deb81eb8413c10e not found: ID does not exist" Nov 26 07:34:12 crc kubenswrapper[4909]: I1126 07:34:12.516652 4909 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" path="/var/lib/kubelet/pods/6d874618-56f7-4b5a-a882-3631cb6ab5ee/volumes" Nov 26 07:34:37 crc kubenswrapper[4909]: I1126 07:34:37.300569 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:34:37 crc kubenswrapper[4909]: I1126 07:34:37.301109 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:34:37 crc kubenswrapper[4909]: I1126 07:34:37.301163 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:34:37 crc kubenswrapper[4909]: I1126 07:34:37.301726 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"682c9fe04226d542318eba43fef0a14ef494d1e8654a34321acd7471f2fa933e"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 07:34:37 crc kubenswrapper[4909]: I1126 07:34:37.301779 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://682c9fe04226d542318eba43fef0a14ef494d1e8654a34321acd7471f2fa933e" gracePeriod=600 Nov 26 07:34:37 crc kubenswrapper[4909]: I1126 07:34:37.626103 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="682c9fe04226d542318eba43fef0a14ef494d1e8654a34321acd7471f2fa933e" exitCode=0 Nov 26 07:34:37 crc kubenswrapper[4909]: I1126 07:34:37.626187 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"682c9fe04226d542318eba43fef0a14ef494d1e8654a34321acd7471f2fa933e"} Nov 26 07:34:37 crc kubenswrapper[4909]: I1126 07:34:37.626363 4909 scope.go:117] "RemoveContainer" containerID="2207939d57c282392282b75820bbde347cad8c78bd10b8f4fb31f3d035e4c246" Nov 26 07:34:38 crc kubenswrapper[4909]: I1126 07:34:38.643290 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52"} Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.435011 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gwz87"] Nov 26 07:35:40 crc kubenswrapper[4909]: E1126 07:35:40.436146 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" containerName="extract-utilities" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.436168 4909 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" containerName="extract-utilities" Nov 26 07:35:40 crc kubenswrapper[4909]: E1126 07:35:40.436198 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" containerName="registry-server" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.436207 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" containerName="registry-server" Nov 26 07:35:40 crc kubenswrapper[4909]: E1126 07:35:40.436224 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" containerName="extract-content" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.436238 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" containerName="extract-content" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.436502 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d874618-56f7-4b5a-a882-3631cb6ab5ee" containerName="registry-server" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.438170 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.448305 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gwz87"] Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.540731 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97bjx\" (UniqueName: \"kubernetes.io/projected/38a11814-c9cd-4f17-b48f-81706ff471e6-kube-api-access-97bjx\") pod \"redhat-operators-gwz87\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.540886 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-catalog-content\") pod \"redhat-operators-gwz87\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.540908 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-utilities\") pod \"redhat-operators-gwz87\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.642100 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97bjx\" (UniqueName: \"kubernetes.io/projected/38a11814-c9cd-4f17-b48f-81706ff471e6-kube-api-access-97bjx\") pod \"redhat-operators-gwz87\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.642223 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-catalog-content\") pod \"redhat-operators-gwz87\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.642247 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-utilities\") pod \"redhat-operators-gwz87\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.642798 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-catalog-content\") pod \"redhat-operators-gwz87\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.642950 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-utilities\") pod \"redhat-operators-gwz87\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.663362 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97bjx\" (UniqueName: \"kubernetes.io/projected/38a11814-c9cd-4f17-b48f-81706ff471e6-kube-api-access-97bjx\") pod \"redhat-operators-gwz87\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:40 crc kubenswrapper[4909]: I1126 07:35:40.758337 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:41 crc kubenswrapper[4909]: I1126 07:35:41.283310 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gwz87"] Nov 26 07:35:41 crc kubenswrapper[4909]: W1126 07:35:41.286253 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a11814_c9cd_4f17_b48f_81706ff471e6.slice/crio-433f5727727c131dd61f0866d8212534b2a9d3b8a3eb60e04d56bcf164fb76f8 WatchSource:0}: Error finding container 433f5727727c131dd61f0866d8212534b2a9d3b8a3eb60e04d56bcf164fb76f8: Status 404 returned error can't find the container with id 433f5727727c131dd61f0866d8212534b2a9d3b8a3eb60e04d56bcf164fb76f8 Nov 26 07:35:42 crc kubenswrapper[4909]: I1126 07:35:42.178296 4909 generic.go:334] "Generic (PLEG): container finished" podID="38a11814-c9cd-4f17-b48f-81706ff471e6" containerID="4594fb8a7a7d00acc993ef53b1acd04a76061e2c92833e39db888311659bc27c" exitCode=0 Nov 26 07:35:42 crc kubenswrapper[4909]: I1126 07:35:42.178671 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwz87" event={"ID":"38a11814-c9cd-4f17-b48f-81706ff471e6","Type":"ContainerDied","Data":"4594fb8a7a7d00acc993ef53b1acd04a76061e2c92833e39db888311659bc27c"} Nov 26 07:35:42 crc kubenswrapper[4909]: I1126 07:35:42.178726 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwz87" event={"ID":"38a11814-c9cd-4f17-b48f-81706ff471e6","Type":"ContainerStarted","Data":"433f5727727c131dd61f0866d8212534b2a9d3b8a3eb60e04d56bcf164fb76f8"} Nov 26 07:35:43 crc kubenswrapper[4909]: I1126 07:35:43.187706 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwz87" event={"ID":"38a11814-c9cd-4f17-b48f-81706ff471e6","Type":"ContainerStarted","Data":"15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd"} Nov 
26 07:35:44 crc kubenswrapper[4909]: I1126 07:35:44.197367 4909 generic.go:334] "Generic (PLEG): container finished" podID="38a11814-c9cd-4f17-b48f-81706ff471e6" containerID="15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd" exitCode=0 Nov 26 07:35:44 crc kubenswrapper[4909]: I1126 07:35:44.197401 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwz87" event={"ID":"38a11814-c9cd-4f17-b48f-81706ff471e6","Type":"ContainerDied","Data":"15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd"} Nov 26 07:35:45 crc kubenswrapper[4909]: I1126 07:35:45.210145 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwz87" event={"ID":"38a11814-c9cd-4f17-b48f-81706ff471e6","Type":"ContainerStarted","Data":"50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757"} Nov 26 07:35:45 crc kubenswrapper[4909]: I1126 07:35:45.232924 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gwz87" podStartSLOduration=2.853163924 podStartE2EDuration="5.232905283s" podCreationTimestamp="2025-11-26 07:35:40 +0000 UTC" firstStartedPulling="2025-11-26 07:35:42.181686428 +0000 UTC m=+2114.327897634" lastFinishedPulling="2025-11-26 07:35:44.561427807 +0000 UTC m=+2116.707638993" observedRunningTime="2025-11-26 07:35:45.230064045 +0000 UTC m=+2117.376275211" watchObservedRunningTime="2025-11-26 07:35:45.232905283 +0000 UTC m=+2117.379116459" Nov 26 07:35:50 crc kubenswrapper[4909]: I1126 07:35:50.759614 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:50 crc kubenswrapper[4909]: I1126 07:35:50.759659 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:50 crc kubenswrapper[4909]: I1126 07:35:50.818041 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:51 crc kubenswrapper[4909]: I1126 07:35:51.305631 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:51 crc kubenswrapper[4909]: I1126 07:35:51.365187 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gwz87"] Nov 26 07:35:53 crc kubenswrapper[4909]: I1126 07:35:53.276982 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gwz87" podUID="38a11814-c9cd-4f17-b48f-81706ff471e6" containerName="registry-server" containerID="cri-o://50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757" gracePeriod=2 Nov 26 07:35:53 crc kubenswrapper[4909]: I1126 07:35:53.747089 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:53 crc kubenswrapper[4909]: I1126 07:35:53.936276 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97bjx\" (UniqueName: \"kubernetes.io/projected/38a11814-c9cd-4f17-b48f-81706ff471e6-kube-api-access-97bjx\") pod \"38a11814-c9cd-4f17-b48f-81706ff471e6\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " Nov 26 07:35:53 crc kubenswrapper[4909]: I1126 07:35:53.936355 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-utilities\") pod \"38a11814-c9cd-4f17-b48f-81706ff471e6\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " Nov 26 07:35:53 crc kubenswrapper[4909]: I1126 07:35:53.936475 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-catalog-content\") pod \"38a11814-c9cd-4f17-b48f-81706ff471e6\" (UID: \"38a11814-c9cd-4f17-b48f-81706ff471e6\") " Nov 26 07:35:53 crc kubenswrapper[4909]: I1126 07:35:53.938843 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-utilities" (OuterVolumeSpecName: "utilities") pod "38a11814-c9cd-4f17-b48f-81706ff471e6" (UID: "38a11814-c9cd-4f17-b48f-81706ff471e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:35:53 crc kubenswrapper[4909]: I1126 07:35:53.950890 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a11814-c9cd-4f17-b48f-81706ff471e6-kube-api-access-97bjx" (OuterVolumeSpecName: "kube-api-access-97bjx") pod "38a11814-c9cd-4f17-b48f-81706ff471e6" (UID: "38a11814-c9cd-4f17-b48f-81706ff471e6"). InnerVolumeSpecName "kube-api-access-97bjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.038645 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97bjx\" (UniqueName: \"kubernetes.io/projected/38a11814-c9cd-4f17-b48f-81706ff471e6-kube-api-access-97bjx\") on node \"crc\" DevicePath \"\"" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.038693 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.287087 4909 generic.go:334] "Generic (PLEG): container finished" podID="38a11814-c9cd-4f17-b48f-81706ff471e6" containerID="50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757" exitCode=0 Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.287132 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gwz87" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.287168 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwz87" event={"ID":"38a11814-c9cd-4f17-b48f-81706ff471e6","Type":"ContainerDied","Data":"50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757"} Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.287250 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwz87" event={"ID":"38a11814-c9cd-4f17-b48f-81706ff471e6","Type":"ContainerDied","Data":"433f5727727c131dd61f0866d8212534b2a9d3b8a3eb60e04d56bcf164fb76f8"} Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.287274 4909 scope.go:117] "RemoveContainer" containerID="50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.314449 4909 scope.go:117] "RemoveContainer" containerID="15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.340674 4909 scope.go:117] "RemoveContainer" containerID="4594fb8a7a7d00acc993ef53b1acd04a76061e2c92833e39db888311659bc27c" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.370136 4909 scope.go:117] "RemoveContainer" containerID="50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757" Nov 26 07:35:54 crc kubenswrapper[4909]: E1126 07:35:54.370618 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757\": container with ID starting with 50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757 not found: ID does not exist" containerID="50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.370654 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757"} err="failed to get container status \"50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757\": rpc error: code = NotFound desc = could not find container \"50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757\": container with ID starting with 50b0ef06b1960740ebe08a1aa56c7e6264a51cbad625609e4ef7b87993600757 not found: ID does not exist" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.370676 4909 scope.go:117] "RemoveContainer" containerID="15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd" Nov 26 07:35:54 crc kubenswrapper[4909]: E1126 07:35:54.371244 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd\": container with ID starting with 15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd not found: ID does not exist" containerID="15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.371392 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd"} err="failed to get container status \"15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd\": rpc error: code = NotFound desc = could not find container 
\"15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd\": container with ID starting with 15be7818818f08fbcfa763a3d4eea4cbb73eda762470ef89a8ef708f9142adcd not found: ID does not exist" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.371552 4909 scope.go:117] "RemoveContainer" containerID="4594fb8a7a7d00acc993ef53b1acd04a76061e2c92833e39db888311659bc27c" Nov 26 07:35:54 crc kubenswrapper[4909]: E1126 07:35:54.371994 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4594fb8a7a7d00acc993ef53b1acd04a76061e2c92833e39db888311659bc27c\": container with ID starting with 4594fb8a7a7d00acc993ef53b1acd04a76061e2c92833e39db888311659bc27c not found: ID does not exist" containerID="4594fb8a7a7d00acc993ef53b1acd04a76061e2c92833e39db888311659bc27c" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.372023 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4594fb8a7a7d00acc993ef53b1acd04a76061e2c92833e39db888311659bc27c"} err="failed to get container status \"4594fb8a7a7d00acc993ef53b1acd04a76061e2c92833e39db888311659bc27c\": rpc error: code = NotFound desc = could not find container \"4594fb8a7a7d00acc993ef53b1acd04a76061e2c92833e39db888311659bc27c\": container with ID starting with 4594fb8a7a7d00acc993ef53b1acd04a76061e2c92833e39db888311659bc27c not found: ID does not exist" Nov 26 07:35:54 crc kubenswrapper[4909]: I1126 07:35:54.982855 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38a11814-c9cd-4f17-b48f-81706ff471e6" (UID: "38a11814-c9cd-4f17-b48f-81706ff471e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:35:55 crc kubenswrapper[4909]: I1126 07:35:55.052860 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38a11814-c9cd-4f17-b48f-81706ff471e6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:35:55 crc kubenswrapper[4909]: I1126 07:35:55.237939 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gwz87"] Nov 26 07:35:55 crc kubenswrapper[4909]: I1126 07:35:55.249459 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gwz87"] Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.473549 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p5mbm"] Nov 26 07:35:56 crc kubenswrapper[4909]: E1126 07:35:56.475586 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a11814-c9cd-4f17-b48f-81706ff471e6" containerName="extract-content" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.475836 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a11814-c9cd-4f17-b48f-81706ff471e6" containerName="extract-content" Nov 26 07:35:56 crc kubenswrapper[4909]: E1126 07:35:56.476053 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a11814-c9cd-4f17-b48f-81706ff471e6" containerName="extract-utilities" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.476248 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a11814-c9cd-4f17-b48f-81706ff471e6" containerName="extract-utilities" Nov 26 07:35:56 crc kubenswrapper[4909]: E1126 07:35:56.476482 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a11814-c9cd-4f17-b48f-81706ff471e6" containerName="registry-server" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.476730 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a11814-c9cd-4f17-b48f-81706ff471e6" containerName="registry-server" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.477344 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a11814-c9cd-4f17-b48f-81706ff471e6" containerName="registry-server" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.479668 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.485765 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p5mbm"] Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.536188 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a11814-c9cd-4f17-b48f-81706ff471e6" path="/var/lib/kubelet/pods/38a11814-c9cd-4f17-b48f-81706ff471e6/volumes" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.674162 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-utilities\") pod \"certified-operators-p5mbm\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.674239 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4x5l\" (UniqueName: \"kubernetes.io/projected/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-kube-api-access-h4x5l\") pod \"certified-operators-p5mbm\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.674287 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-catalog-content\") pod \"certified-operators-p5mbm\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.775837 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-utilities\") pod \"certified-operators-p5mbm\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.776225 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4x5l\" (UniqueName: \"kubernetes.io/projected/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-kube-api-access-h4x5l\") pod \"certified-operators-p5mbm\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.776390 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-catalog-content\") pod \"certified-operators-p5mbm\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.776694 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-utilities\") pod \"certified-operators-p5mbm\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.777064 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-catalog-content\") pod \"certified-operators-p5mbm\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.797067 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4x5l\" (UniqueName: \"kubernetes.io/projected/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-kube-api-access-h4x5l\") pod \"certified-operators-p5mbm\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:35:56 crc kubenswrapper[4909]: I1126 07:35:56.802576 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:35:57 crc kubenswrapper[4909]: I1126 07:35:57.321517 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p5mbm"] Nov 26 07:35:58 crc kubenswrapper[4909]: I1126 07:35:58.321902 4909 generic.go:334] "Generic (PLEG): container finished" podID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" containerID="58d32815f4478dc71c9963760f5ab1830e333bd816778b20014d3b41268ea211" exitCode=0 Nov 26 07:35:58 crc kubenswrapper[4909]: I1126 07:35:58.321966 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5mbm" event={"ID":"d42f6f04-b82b-4ddd-b485-995ebe2f17d7","Type":"ContainerDied","Data":"58d32815f4478dc71c9963760f5ab1830e333bd816778b20014d3b41268ea211"} Nov 26 07:35:58 crc kubenswrapper[4909]: I1126 07:35:58.322330 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5mbm" event={"ID":"d42f6f04-b82b-4ddd-b485-995ebe2f17d7","Type":"ContainerStarted","Data":"4b977285524eff2fc2178e7b24731bd4a5ee7658a5743db639a4f52f86ef70b9"} Nov 26 07:35:59 crc kubenswrapper[4909]: I1126 07:35:59.331309 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5mbm" event={"ID":"d42f6f04-b82b-4ddd-b485-995ebe2f17d7","Type":"ContainerStarted","Data":"ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16"} Nov 26 07:36:00 crc kubenswrapper[4909]: I1126 07:36:00.340047 4909 generic.go:334] "Generic (PLEG): container finished" podID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" containerID="ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16" exitCode=0 Nov 26 07:36:00 crc kubenswrapper[4909]: I1126 07:36:00.340093 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5mbm" event={"ID":"d42f6f04-b82b-4ddd-b485-995ebe2f17d7","Type":"ContainerDied","Data":"ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16"} Nov 26 07:36:01 crc kubenswrapper[4909]: I1126 07:36:01.352841 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5mbm" event={"ID":"d42f6f04-b82b-4ddd-b485-995ebe2f17d7","Type":"ContainerStarted","Data":"83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173"} Nov 26 07:36:01 crc kubenswrapper[4909]: I1126 07:36:01.379785 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p5mbm" podStartSLOduration=2.946965407 podStartE2EDuration="5.379758873s" podCreationTimestamp="2025-11-26 07:35:56 +0000 UTC" firstStartedPulling="2025-11-26 07:35:58.323724216 +0000 UTC m=+2130.469935392" lastFinishedPulling="2025-11-26 07:36:00.756517652 
+0000 UTC m=+2132.902728858" observedRunningTime="2025-11-26 07:36:01.376642128 +0000 UTC m=+2133.522853324" watchObservedRunningTime="2025-11-26 07:36:01.379758873 +0000 UTC m=+2133.525970069" Nov 26 07:36:06 crc kubenswrapper[4909]: I1126 07:36:06.803708 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:36:06 crc kubenswrapper[4909]: I1126 07:36:06.805977 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:36:06 crc kubenswrapper[4909]: I1126 07:36:06.919757 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:36:07 crc kubenswrapper[4909]: I1126 07:36:07.440149 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:36:07 crc kubenswrapper[4909]: I1126 07:36:07.487985 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p5mbm"] Nov 26 07:36:09 crc kubenswrapper[4909]: I1126 07:36:09.410274 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p5mbm" podUID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" containerName="registry-server" containerID="cri-o://83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173" gracePeriod=2 Nov 26 07:36:09 crc kubenswrapper[4909]: I1126 07:36:09.855411 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:36:09 crc kubenswrapper[4909]: I1126 07:36:09.979819 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-catalog-content\") pod \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " Nov 26 07:36:09 crc kubenswrapper[4909]: I1126 07:36:09.979954 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4x5l\" (UniqueName: \"kubernetes.io/projected/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-kube-api-access-h4x5l\") pod \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " Nov 26 07:36:09 crc kubenswrapper[4909]: I1126 07:36:09.980016 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-utilities\") pod \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\" (UID: \"d42f6f04-b82b-4ddd-b485-995ebe2f17d7\") " Nov 26 07:36:09 crc kubenswrapper[4909]: I1126 07:36:09.981961 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-utilities" (OuterVolumeSpecName: "utilities") pod "d42f6f04-b82b-4ddd-b485-995ebe2f17d7" (UID: "d42f6f04-b82b-4ddd-b485-995ebe2f17d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:36:09 crc kubenswrapper[4909]: I1126 07:36:09.986698 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-kube-api-access-h4x5l" (OuterVolumeSpecName: "kube-api-access-h4x5l") pod "d42f6f04-b82b-4ddd-b485-995ebe2f17d7" (UID: "d42f6f04-b82b-4ddd-b485-995ebe2f17d7"). 
InnerVolumeSpecName "kube-api-access-h4x5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.050102 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d42f6f04-b82b-4ddd-b485-995ebe2f17d7" (UID: "d42f6f04-b82b-4ddd-b485-995ebe2f17d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.081840 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.081877 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4x5l\" (UniqueName: \"kubernetes.io/projected/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-kube-api-access-h4x5l\") on node \"crc\" DevicePath \"\"" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.081889 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d42f6f04-b82b-4ddd-b485-995ebe2f17d7-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.417200 4909 generic.go:334] "Generic (PLEG): container finished" podID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" containerID="83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173" exitCode=0 Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.417240 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5mbm" event={"ID":"d42f6f04-b82b-4ddd-b485-995ebe2f17d7","Type":"ContainerDied","Data":"83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173"} Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.417269 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5mbm" event={"ID":"d42f6f04-b82b-4ddd-b485-995ebe2f17d7","Type":"ContainerDied","Data":"4b977285524eff2fc2178e7b24731bd4a5ee7658a5743db639a4f52f86ef70b9"} Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.417274 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p5mbm" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.417286 4909 scope.go:117] "RemoveContainer" containerID="83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.448487 4909 scope.go:117] "RemoveContainer" containerID="ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.471712 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p5mbm"] Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.476930 4909 scope.go:117] "RemoveContainer" containerID="58d32815f4478dc71c9963760f5ab1830e333bd816778b20014d3b41268ea211" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.479612 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p5mbm"] Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.510453 4909 scope.go:117] "RemoveContainer" containerID="83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173" Nov 26 07:36:10 crc kubenswrapper[4909]: E1126 07:36:10.510954 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173\": container with ID starting with 83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173 not found: ID does not exist" containerID="83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.511021 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173"} err="failed to get container status \"83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173\": rpc error: code = NotFound desc = could not find container \"83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173\": container with ID starting with 83920d4d7d35c12e29c302b86dc2b5b40f69fd05c41b0cea6786eb56cfc09173 not found: ID does not exist" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.511091 4909 scope.go:117] "RemoveContainer" containerID="ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16" Nov 26 07:36:10 crc kubenswrapper[4909]: E1126 07:36:10.511915 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16\": container with ID starting with ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16 not found: ID does not exist" containerID="ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.511940 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16"} err="failed to get container status \"ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16\": rpc error: code = NotFound desc = could not find container \"ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16\": container with ID starting with ec9cabd47259cfb0d58a2f2dedde7910142a018f98fa35594bd4363c28d3fb16 not found: ID does not exist" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.511959 4909 scope.go:117] "RemoveContainer" 
containerID="58d32815f4478dc71c9963760f5ab1830e333bd816778b20014d3b41268ea211" Nov 26 07:36:10 crc kubenswrapper[4909]: E1126 07:36:10.512308 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d32815f4478dc71c9963760f5ab1830e333bd816778b20014d3b41268ea211\": container with ID starting with 58d32815f4478dc71c9963760f5ab1830e333bd816778b20014d3b41268ea211 not found: ID does not exist" containerID="58d32815f4478dc71c9963760f5ab1830e333bd816778b20014d3b41268ea211" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.512354 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d32815f4478dc71c9963760f5ab1830e333bd816778b20014d3b41268ea211"} err="failed to get container status \"58d32815f4478dc71c9963760f5ab1830e333bd816778b20014d3b41268ea211\": rpc error: code = NotFound desc = could not find container \"58d32815f4478dc71c9963760f5ab1830e333bd816778b20014d3b41268ea211\": container with ID starting with 58d32815f4478dc71c9963760f5ab1830e333bd816778b20014d3b41268ea211 not found: ID does not exist" Nov 26 07:36:10 crc kubenswrapper[4909]: I1126 07:36:10.516708 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" path="/var/lib/kubelet/pods/d42f6f04-b82b-4ddd-b485-995ebe2f17d7/volumes" Nov 26 07:36:37 crc kubenswrapper[4909]: I1126 07:36:37.301330 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:36:37 crc kubenswrapper[4909]: I1126 07:36:37.302705 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.409745 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5v5p5"] Nov 26 07:37:01 crc kubenswrapper[4909]: E1126 07:37:01.410424 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" containerName="extract-utilities" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.410441 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" containerName="extract-utilities" Nov 26 07:37:01 crc kubenswrapper[4909]: E1126 07:37:01.410465 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" containerName="registry-server" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.410474 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" containerName="registry-server" Nov 26 07:37:01 crc kubenswrapper[4909]: E1126 07:37:01.410495 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" containerName="extract-content" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.410503 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" containerName="extract-content" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 
07:37:01.410709 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42f6f04-b82b-4ddd-b485-995ebe2f17d7" containerName="registry-server" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.412016 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.431758 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5v5p5"] Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.585830 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-utilities\") pod \"community-operators-5v5p5\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.585887 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-catalog-content\") pod \"community-operators-5v5p5\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.586231 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5jh5\" (UniqueName: \"kubernetes.io/projected/ea69a477-af9a-44e5-973c-63b8ce1b105f-kube-api-access-q5jh5\") pod \"community-operators-5v5p5\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.687077 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-utilities\") pod \"community-operators-5v5p5\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.687126 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-catalog-content\") pod \"community-operators-5v5p5\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.687209 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5jh5\" (UniqueName: \"kubernetes.io/projected/ea69a477-af9a-44e5-973c-63b8ce1b105f-kube-api-access-q5jh5\") pod \"community-operators-5v5p5\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.687741 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-utilities\") pod \"community-operators-5v5p5\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.687878 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-catalog-content\") pod \"community-operators-5v5p5\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.718610 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5jh5\" (UniqueName: \"kubernetes.io/projected/ea69a477-af9a-44e5-973c-63b8ce1b105f-kube-api-access-q5jh5\") pod \"community-operators-5v5p5\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:01 crc kubenswrapper[4909]: I1126 07:37:01.738876 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:02 crc kubenswrapper[4909]: I1126 07:37:02.208910 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5v5p5"] Nov 26 07:37:02 crc kubenswrapper[4909]: I1126 07:37:02.923032 4909 generic.go:334] "Generic (PLEG): container finished" podID="ea69a477-af9a-44e5-973c-63b8ce1b105f" containerID="10668e6e8f29f5f911724dda0497d6c4908bca022ccb1e48740ab6450bfe29bf" exitCode=0 Nov 26 07:37:02 crc kubenswrapper[4909]: I1126 07:37:02.923077 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v5p5" event={"ID":"ea69a477-af9a-44e5-973c-63b8ce1b105f","Type":"ContainerDied","Data":"10668e6e8f29f5f911724dda0497d6c4908bca022ccb1e48740ab6450bfe29bf"} Nov 26 07:37:02 crc kubenswrapper[4909]: I1126 07:37:02.923103 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v5p5" event={"ID":"ea69a477-af9a-44e5-973c-63b8ce1b105f","Type":"ContainerStarted","Data":"9a24cfe7ad86c869db69264c158bed24589094b26d57d4c42c643ef4c3a02336"} Nov 26 07:37:03 crc kubenswrapper[4909]: I1126 07:37:03.945446 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v5p5" event={"ID":"ea69a477-af9a-44e5-973c-63b8ce1b105f","Type":"ContainerStarted","Data":"aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311"} Nov 26 07:37:04 crc kubenswrapper[4909]: I1126 07:37:04.957037 4909 generic.go:334] "Generic (PLEG): container finished" podID="ea69a477-af9a-44e5-973c-63b8ce1b105f" containerID="aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311" exitCode=0 Nov 26 07:37:04 crc kubenswrapper[4909]: I1126 07:37:04.957105 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v5p5" event={"ID":"ea69a477-af9a-44e5-973c-63b8ce1b105f","Type":"ContainerDied","Data":"aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311"} Nov 26 07:37:05 crc kubenswrapper[4909]: I1126 07:37:05.967355 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v5p5" event={"ID":"ea69a477-af9a-44e5-973c-63b8ce1b105f","Type":"ContainerStarted","Data":"c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370"} Nov 26 07:37:05 crc kubenswrapper[4909]: I1126 07:37:05.993307 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5v5p5" podStartSLOduration=2.537785436 podStartE2EDuration="4.993291821s" podCreationTimestamp="2025-11-26 07:37:01 +0000 UTC" firstStartedPulling="2025-11-26 07:37:02.925561475 +0000 UTC m=+2195.071772651" lastFinishedPulling="2025-11-26 07:37:05.38106787 
+0000 UTC m=+2197.527279036" observedRunningTime="2025-11-26 07:37:05.990374481 +0000 UTC m=+2198.136585657" watchObservedRunningTime="2025-11-26 07:37:05.993291821 +0000 UTC m=+2198.139502987" Nov 26 07:37:07 crc kubenswrapper[4909]: I1126 07:37:07.301571 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:37:07 crc kubenswrapper[4909]: I1126 07:37:07.301695 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:37:11 crc kubenswrapper[4909]: I1126 07:37:11.740215 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:11 crc kubenswrapper[4909]: I1126 07:37:11.740866 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:11 crc kubenswrapper[4909]: I1126 07:37:11.810064 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:12 crc kubenswrapper[4909]: I1126 07:37:12.079971 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:12 crc kubenswrapper[4909]: I1126 07:37:12.127746 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5v5p5"] Nov 26 07:37:14 crc kubenswrapper[4909]: I1126 07:37:14.038830 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5v5p5" podUID="ea69a477-af9a-44e5-973c-63b8ce1b105f" containerName="registry-server" containerID="cri-o://c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370" gracePeriod=2 Nov 26 07:37:14 crc kubenswrapper[4909]: I1126 07:37:14.976852 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.048991 4909 generic.go:334] "Generic (PLEG): container finished" podID="ea69a477-af9a-44e5-973c-63b8ce1b105f" containerID="c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370" exitCode=0 Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.049036 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v5p5" event={"ID":"ea69a477-af9a-44e5-973c-63b8ce1b105f","Type":"ContainerDied","Data":"c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370"} Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.049063 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v5p5" event={"ID":"ea69a477-af9a-44e5-973c-63b8ce1b105f","Type":"ContainerDied","Data":"9a24cfe7ad86c869db69264c158bed24589094b26d57d4c42c643ef4c3a02336"} Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.049082 4909 scope.go:117] "RemoveContainer" containerID="c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.049149 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5v5p5" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.073532 4909 scope.go:117] "RemoveContainer" containerID="aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.088503 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5jh5\" (UniqueName: \"kubernetes.io/projected/ea69a477-af9a-44e5-973c-63b8ce1b105f-kube-api-access-q5jh5\") pod \"ea69a477-af9a-44e5-973c-63b8ce1b105f\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.088790 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-catalog-content\") pod \"ea69a477-af9a-44e5-973c-63b8ce1b105f\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.088840 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-utilities\") pod \"ea69a477-af9a-44e5-973c-63b8ce1b105f\" (UID: \"ea69a477-af9a-44e5-973c-63b8ce1b105f\") " Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.090236 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-utilities" (OuterVolumeSpecName: "utilities") pod "ea69a477-af9a-44e5-973c-63b8ce1b105f" (UID: "ea69a477-af9a-44e5-973c-63b8ce1b105f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.096692 4909 scope.go:117] "RemoveContainer" containerID="10668e6e8f29f5f911724dda0497d6c4908bca022ccb1e48740ab6450bfe29bf" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.097837 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea69a477-af9a-44e5-973c-63b8ce1b105f-kube-api-access-q5jh5" (OuterVolumeSpecName: "kube-api-access-q5jh5") pod "ea69a477-af9a-44e5-973c-63b8ce1b105f" (UID: "ea69a477-af9a-44e5-973c-63b8ce1b105f"). InnerVolumeSpecName "kube-api-access-q5jh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.141953 4909 scope.go:117] "RemoveContainer" containerID="c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370" Nov 26 07:37:15 crc kubenswrapper[4909]: E1126 07:37:15.142383 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370\": container with ID starting with c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370 not found: ID does not exist" containerID="c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.142409 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370"} err="failed to get container status \"c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370\": rpc error: code = NotFound desc = could not find container \"c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370\": container with ID starting with c8a07a9b2aa1f15729551429539e97dc2f8cf7b44bf97f9eb7cf8deb14c04370 not found: ID does not exist" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.142426 4909 scope.go:117] "RemoveContainer" containerID="aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311" Nov 26 07:37:15 crc kubenswrapper[4909]: E1126 07:37:15.142742 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311\": container with ID starting with aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311 not found: ID does not exist" containerID="aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.142765 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311"} err="failed to get container status \"aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311\": rpc error: code = NotFound desc = could not find container \"aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311\": container with ID starting with aa234c614bdd86ce300bc61156e47277be20681eeff47f7da1ffc6a60259c311 not found: ID does not exist" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.142779 4909 scope.go:117] "RemoveContainer" containerID="10668e6e8f29f5f911724dda0497d6c4908bca022ccb1e48740ab6450bfe29bf" Nov 26 07:37:15 crc kubenswrapper[4909]: E1126 07:37:15.143017 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"10668e6e8f29f5f911724dda0497d6c4908bca022ccb1e48740ab6450bfe29bf\": container with ID starting with 10668e6e8f29f5f911724dda0497d6c4908bca022ccb1e48740ab6450bfe29bf not found: ID does not exist" containerID="10668e6e8f29f5f911724dda0497d6c4908bca022ccb1e48740ab6450bfe29bf" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.143042 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10668e6e8f29f5f911724dda0497d6c4908bca022ccb1e48740ab6450bfe29bf"} err="failed to get container status \"10668e6e8f29f5f911724dda0497d6c4908bca022ccb1e48740ab6450bfe29bf\": rpc error: code = NotFound desc = could not find container \"10668e6e8f29f5f911724dda0497d6c4908bca022ccb1e48740ab6450bfe29bf\": container with ID starting with 10668e6e8f29f5f911724dda0497d6c4908bca022ccb1e48740ab6450bfe29bf not found: ID does not exist" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.161582 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea69a477-af9a-44e5-973c-63b8ce1b105f" (UID: "ea69a477-af9a-44e5-973c-63b8ce1b105f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.190624 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.190658 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea69a477-af9a-44e5-973c-63b8ce1b105f-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.190671 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5jh5\" (UniqueName: \"kubernetes.io/projected/ea69a477-af9a-44e5-973c-63b8ce1b105f-kube-api-access-q5jh5\") on node \"crc\" DevicePath \"\"" Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.386229 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5v5p5"] Nov 26 07:37:15 crc kubenswrapper[4909]: I1126 07:37:15.392358 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5v5p5"] Nov 26 07:37:16 crc kubenswrapper[4909]: I1126 07:37:16.510516 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea69a477-af9a-44e5-973c-63b8ce1b105f" path="/var/lib/kubelet/pods/ea69a477-af9a-44e5-973c-63b8ce1b105f/volumes" Nov 26 07:37:37 crc kubenswrapper[4909]: I1126 07:37:37.301509 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:37:37 crc kubenswrapper[4909]: I1126 07:37:37.302199 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:37:37 crc kubenswrapper[4909]: I1126 07:37:37.302254 4909 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:37:37 crc kubenswrapper[4909]: I1126 07:37:37.302907 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 07:37:37 crc kubenswrapper[4909]: I1126 07:37:37.302974 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" gracePeriod=600 Nov 26 07:37:37 crc kubenswrapper[4909]: E1126 07:37:37.435011 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:37:38 crc kubenswrapper[4909]: I1126 07:37:38.248829 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" exitCode=0 Nov 26 07:37:38 crc kubenswrapper[4909]: I1126 07:37:38.248888 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52"} Nov 26 07:37:38 crc kubenswrapper[4909]: I1126 07:37:38.248921 4909 scope.go:117] "RemoveContainer" containerID="682c9fe04226d542318eba43fef0a14ef494d1e8654a34321acd7471f2fa933e" Nov 26 07:37:38 crc kubenswrapper[4909]: I1126 07:37:38.249391 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:37:38 crc kubenswrapper[4909]: E1126 07:37:38.249760 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:37:52 crc kubenswrapper[4909]: I1126 07:37:52.500164 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:37:52 crc kubenswrapper[4909]: E1126 07:37:52.501249 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" 
podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:38:04 crc kubenswrapper[4909]: I1126 07:38:04.500014 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:38:04 crc kubenswrapper[4909]: E1126 07:38:04.502568 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:38:18 crc kubenswrapper[4909]: I1126 07:38:18.504532 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:38:18 crc kubenswrapper[4909]: E1126 07:38:18.505326 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:38:32 crc kubenswrapper[4909]: I1126 07:38:32.498712 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:38:32 crc kubenswrapper[4909]: E1126 07:38:32.499422 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:38:43 crc kubenswrapper[4909]: I1126 07:38:43.499037 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:38:43 crc kubenswrapper[4909]: E1126 07:38:43.500176 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:38:54 crc kubenswrapper[4909]: I1126 07:38:54.499751 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:38:54 crc kubenswrapper[4909]: E1126 07:38:54.501235 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:39:09 crc kubenswrapper[4909]: I1126 07:39:09.498700 4909 scope.go:117] "RemoveContainer" 
containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:39:09 crc kubenswrapper[4909]: E1126 07:39:09.499377 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:39:22 crc kubenswrapper[4909]: I1126 07:39:22.499237 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:39:22 crc kubenswrapper[4909]: E1126 07:39:22.500197 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:39:35 crc kubenswrapper[4909]: I1126 07:39:35.499364 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:39:35 crc kubenswrapper[4909]: E1126 07:39:35.500538 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:39:49 crc kubenswrapper[4909]: I1126 07:39:49.499320 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:39:49 crc kubenswrapper[4909]: E1126 07:39:49.500413 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:40:00 crc kubenswrapper[4909]: I1126 07:40:00.498534 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:40:00 crc kubenswrapper[4909]: E1126 07:40:00.499495 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:40:12 crc kubenswrapper[4909]: I1126 07:40:12.498559 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:40:12 crc kubenswrapper[4909]: E1126 07:40:12.500802 4909 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:40:26 crc kubenswrapper[4909]: I1126 07:40:26.499640 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:40:26 crc kubenswrapper[4909]: E1126 07:40:26.500474 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:40:40 crc kubenswrapper[4909]: I1126 07:40:40.501222 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:40:40 crc kubenswrapper[4909]: E1126 07:40:40.502458 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:40:55 crc kubenswrapper[4909]: I1126 07:40:55.499071 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:40:55 crc kubenswrapper[4909]: E1126 07:40:55.500166 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:41:09 crc kubenswrapper[4909]: I1126 07:41:09.499015 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:41:09 crc kubenswrapper[4909]: E1126 07:41:09.499861 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:41:20 crc kubenswrapper[4909]: I1126 07:41:20.503767 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:41:20 crc kubenswrapper[4909]: E1126 07:41:20.504572 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:41:32 crc kubenswrapper[4909]: I1126 07:41:32.498789 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:41:32 crc kubenswrapper[4909]: E1126 07:41:32.501105 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:41:44 crc kubenswrapper[4909]: I1126 07:41:44.500232 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:41:44 crc kubenswrapper[4909]: E1126 07:41:44.501667 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:41:55 crc kubenswrapper[4909]: I1126 07:41:55.498875 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:41:55 crc kubenswrapper[4909]: E1126 07:41:55.499850 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:42:10 crc kubenswrapper[4909]: I1126 07:42:10.499328 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:42:10 crc kubenswrapper[4909]: E1126 07:42:10.500731 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:42:24 crc kubenswrapper[4909]: I1126 07:42:24.499437 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:42:24 crc kubenswrapper[4909]: E1126 07:42:24.500233 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" 
podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:42:38 crc kubenswrapper[4909]: I1126 07:42:38.503634 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52" Nov 26 07:42:39 crc kubenswrapper[4909]: I1126 07:42:39.596811 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"aa2dc649f26bd2e1c0e531ea6c4de05ff32b5c2300a83eefa8dd6a1922d306b6"} Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.514251 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gz2gs"] Nov 26 07:44:34 crc kubenswrapper[4909]: E1126 07:44:34.515137 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea69a477-af9a-44e5-973c-63b8ce1b105f" containerName="extract-content" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.515149 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea69a477-af9a-44e5-973c-63b8ce1b105f" containerName="extract-content" Nov 26 07:44:34 crc kubenswrapper[4909]: E1126 07:44:34.515186 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea69a477-af9a-44e5-973c-63b8ce1b105f" containerName="extract-utilities" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.515193 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea69a477-af9a-44e5-973c-63b8ce1b105f" containerName="extract-utilities" Nov 26 07:44:34 crc kubenswrapper[4909]: E1126 07:44:34.515200 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea69a477-af9a-44e5-973c-63b8ce1b105f" containerName="registry-server" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.515205 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea69a477-af9a-44e5-973c-63b8ce1b105f" containerName="registry-server" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.515346 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea69a477-af9a-44e5-973c-63b8ce1b105f" containerName="registry-server" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.516704 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz2gs" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.526170 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz2gs"] Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.667091 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-utilities\") pod \"redhat-marketplace-gz2gs\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") " pod="openshift-marketplace/redhat-marketplace-gz2gs" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.667169 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd25c\" (UniqueName: \"kubernetes.io/projected/ace92610-3282-430c-8a19-3efab25d27b1-kube-api-access-rd25c\") pod \"redhat-marketplace-gz2gs\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") " pod="openshift-marketplace/redhat-marketplace-gz2gs" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.667202 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-catalog-content\") pod \"redhat-marketplace-gz2gs\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") " pod="openshift-marketplace/redhat-marketplace-gz2gs" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.770075 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd25c\" (UniqueName: \"kubernetes.io/projected/ace92610-3282-430c-8a19-3efab25d27b1-kube-api-access-rd25c\") pod \"redhat-marketplace-gz2gs\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") " pod="openshift-marketplace/redhat-marketplace-gz2gs" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.770172 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-catalog-content\") pod \"redhat-marketplace-gz2gs\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") " pod="openshift-marketplace/redhat-marketplace-gz2gs" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.770274 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-utilities\") pod \"redhat-marketplace-gz2gs\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") " pod="openshift-marketplace/redhat-marketplace-gz2gs" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.770851 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-catalog-content\") pod \"redhat-marketplace-gz2gs\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") " pod="openshift-marketplace/redhat-marketplace-gz2gs" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.771095 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-utilities\") pod \"redhat-marketplace-gz2gs\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") " pod="openshift-marketplace/redhat-marketplace-gz2gs" Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.790020 4909 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rd25c\" (UniqueName: \"kubernetes.io/projected/ace92610-3282-430c-8a19-3efab25d27b1-kube-api-access-rd25c\") pod \"redhat-marketplace-gz2gs\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") " pod="openshift-marketplace/redhat-marketplace-gz2gs"
Nov 26 07:44:34 crc kubenswrapper[4909]: I1126 07:44:34.833944 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz2gs"
Nov 26 07:44:35 crc kubenswrapper[4909]: W1126 07:44:35.304843 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace92610_3282_430c_8a19_3efab25d27b1.slice/crio-3258f1b30c279e7688bc8f7376477029cddfccd92b7d8cf97fdfea6725d9b3b4 WatchSource:0}: Error finding container 3258f1b30c279e7688bc8f7376477029cddfccd92b7d8cf97fdfea6725d9b3b4: Status 404 returned error can't find the container with id 3258f1b30c279e7688bc8f7376477029cddfccd92b7d8cf97fdfea6725d9b3b4
Nov 26 07:44:35 crc kubenswrapper[4909]: I1126 07:44:35.306106 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz2gs"]
Nov 26 07:44:35 crc kubenswrapper[4909]: I1126 07:44:35.636514 4909 generic.go:334] "Generic (PLEG): container finished" podID="ace92610-3282-430c-8a19-3efab25d27b1" containerID="912fee84ff7be788e36e0e4b98600df90efc71ed3c577370b3755e175d81a43c" exitCode=0
Nov 26 07:44:35 crc kubenswrapper[4909]: I1126 07:44:35.636671 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz2gs" event={"ID":"ace92610-3282-430c-8a19-3efab25d27b1","Type":"ContainerDied","Data":"912fee84ff7be788e36e0e4b98600df90efc71ed3c577370b3755e175d81a43c"}
Nov 26 07:44:35 crc kubenswrapper[4909]: I1126 07:44:35.638239 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz2gs" event={"ID":"ace92610-3282-430c-8a19-3efab25d27b1","Type":"ContainerStarted","Data":"3258f1b30c279e7688bc8f7376477029cddfccd92b7d8cf97fdfea6725d9b3b4"}
Nov 26 07:44:35 crc kubenswrapper[4909]: I1126 07:44:35.638830 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 26 07:44:36 crc kubenswrapper[4909]: I1126 07:44:36.646111 4909 generic.go:334] "Generic (PLEG): container finished" podID="ace92610-3282-430c-8a19-3efab25d27b1" containerID="2c0b28e253c253606943781651bb4fcda9b61a0b726a127dc233bbc29c021bd2" exitCode=0
Nov 26 07:44:36 crc kubenswrapper[4909]: I1126 07:44:36.646156 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz2gs" event={"ID":"ace92610-3282-430c-8a19-3efab25d27b1","Type":"ContainerDied","Data":"2c0b28e253c253606943781651bb4fcda9b61a0b726a127dc233bbc29c021bd2"}
Nov 26 07:44:37 crc kubenswrapper[4909]: I1126 07:44:37.656825 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz2gs" event={"ID":"ace92610-3282-430c-8a19-3efab25d27b1","Type":"ContainerStarted","Data":"6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32"}
Nov 26 07:44:37 crc kubenswrapper[4909]: I1126 07:44:37.693712 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gz2gs" podStartSLOduration=2.295291472 podStartE2EDuration="3.693692246s" podCreationTimestamp="2025-11-26 07:44:34 +0000 UTC" firstStartedPulling="2025-11-26 07:44:35.638524156 +0000 UTC m=+2647.784735332" lastFinishedPulling="2025-11-26 07:44:37.03692494 +0000 UTC m=+2649.183136106" observedRunningTime="2025-11-26 07:44:37.685584475 +0000 UTC m=+2649.831795651" watchObservedRunningTime="2025-11-26 07:44:37.693692246 +0000 UTC m=+2649.839903402"
Nov 26 07:44:44 crc kubenswrapper[4909]: I1126 07:44:44.834769 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gz2gs"
Nov 26 07:44:44 crc kubenswrapper[4909]: I1126 07:44:44.835092 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gz2gs"
Nov 26 07:44:44 crc kubenswrapper[4909]: I1126 07:44:44.884002 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gz2gs"
Nov 26 07:44:45 crc kubenswrapper[4909]: I1126 07:44:45.770046 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gz2gs"
Nov 26 07:44:46 crc kubenswrapper[4909]: I1126 07:44:46.516685 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz2gs"]
Nov 26 07:44:47 crc kubenswrapper[4909]: I1126 07:44:47.741068 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gz2gs" podUID="ace92610-3282-430c-8a19-3efab25d27b1" containerName="registry-server" containerID="cri-o://6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32" gracePeriod=2
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.199978 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz2gs"
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.278790 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-utilities\") pod \"ace92610-3282-430c-8a19-3efab25d27b1\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") "
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.278944 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-catalog-content\") pod \"ace92610-3282-430c-8a19-3efab25d27b1\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") "
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.278972 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd25c\" (UniqueName: \"kubernetes.io/projected/ace92610-3282-430c-8a19-3efab25d27b1-kube-api-access-rd25c\") pod \"ace92610-3282-430c-8a19-3efab25d27b1\" (UID: \"ace92610-3282-430c-8a19-3efab25d27b1\") "
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.279882 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-utilities" (OuterVolumeSpecName: "utilities") pod "ace92610-3282-430c-8a19-3efab25d27b1" (UID: "ace92610-3282-430c-8a19-3efab25d27b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.284747 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace92610-3282-430c-8a19-3efab25d27b1-kube-api-access-rd25c" (OuterVolumeSpecName: "kube-api-access-rd25c") pod "ace92610-3282-430c-8a19-3efab25d27b1" (UID: "ace92610-3282-430c-8a19-3efab25d27b1"). InnerVolumeSpecName "kube-api-access-rd25c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.301379 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ace92610-3282-430c-8a19-3efab25d27b1" (UID: "ace92610-3282-430c-8a19-3efab25d27b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.379949 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd25c\" (UniqueName: \"kubernetes.io/projected/ace92610-3282-430c-8a19-3efab25d27b1-kube-api-access-rd25c\") on node \"crc\" DevicePath \"\""
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.380005 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.380017 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace92610-3282-430c-8a19-3efab25d27b1-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.755148 4909 generic.go:334] "Generic (PLEG): container finished" podID="ace92610-3282-430c-8a19-3efab25d27b1" containerID="6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32" exitCode=0
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.755208 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz2gs" event={"ID":"ace92610-3282-430c-8a19-3efab25d27b1","Type":"ContainerDied","Data":"6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32"}
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.755246 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz2gs" event={"ID":"ace92610-3282-430c-8a19-3efab25d27b1","Type":"ContainerDied","Data":"3258f1b30c279e7688bc8f7376477029cddfccd92b7d8cf97fdfea6725d9b3b4"}
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.755269 4909 scope.go:117] "RemoveContainer" containerID="6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32"
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.755803 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz2gs"
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.786629 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz2gs"]
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.790515 4909 scope.go:117] "RemoveContainer" containerID="2c0b28e253c253606943781651bb4fcda9b61a0b726a127dc233bbc29c021bd2"
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.812369 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz2gs"]
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.827006 4909 scope.go:117] "RemoveContainer" containerID="912fee84ff7be788e36e0e4b98600df90efc71ed3c577370b3755e175d81a43c"
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.853655 4909 scope.go:117] "RemoveContainer" containerID="6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32"
Nov 26 07:44:48 crc kubenswrapper[4909]: E1126 07:44:48.854063 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32\": container with ID starting with 6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32 not found: ID does not exist" containerID="6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32"
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.854116 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32"} err="failed to get container status \"6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32\": rpc error: code = NotFound desc = could not find container \"6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32\": container with ID starting with 6b7e845ac3628d9ab22fb21df75be14786a08d6a71acdf487d67942bbd422c32 not found: ID does not exist"
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.854148 4909 scope.go:117] "RemoveContainer" containerID="2c0b28e253c253606943781651bb4fcda9b61a0b726a127dc233bbc29c021bd2"
Nov 26 07:44:48 crc kubenswrapper[4909]: E1126 07:44:48.854537 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0b28e253c253606943781651bb4fcda9b61a0b726a127dc233bbc29c021bd2\": container with ID starting with 2c0b28e253c253606943781651bb4fcda9b61a0b726a127dc233bbc29c021bd2 not found: ID does not exist" containerID="2c0b28e253c253606943781651bb4fcda9b61a0b726a127dc233bbc29c021bd2"
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.854560 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0b28e253c253606943781651bb4fcda9b61a0b726a127dc233bbc29c021bd2"} err="failed to get container status \"2c0b28e253c253606943781651bb4fcda9b61a0b726a127dc233bbc29c021bd2\": rpc error: code = NotFound desc = could not find container \"2c0b28e253c253606943781651bb4fcda9b61a0b726a127dc233bbc29c021bd2\": container with ID starting with 2c0b28e253c253606943781651bb4fcda9b61a0b726a127dc233bbc29c021bd2 not found: ID does not exist"
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.854576 4909 scope.go:117] "RemoveContainer" containerID="912fee84ff7be788e36e0e4b98600df90efc71ed3c577370b3755e175d81a43c"
Nov 26 07:44:48 crc kubenswrapper[4909]: E1126 07:44:48.855032 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"912fee84ff7be788e36e0e4b98600df90efc71ed3c577370b3755e175d81a43c\": container with ID starting with 912fee84ff7be788e36e0e4b98600df90efc71ed3c577370b3755e175d81a43c not found: ID does not exist" containerID="912fee84ff7be788e36e0e4b98600df90efc71ed3c577370b3755e175d81a43c"
Nov 26 07:44:48 crc kubenswrapper[4909]: I1126 07:44:48.855068 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"912fee84ff7be788e36e0e4b98600df90efc71ed3c577370b3755e175d81a43c"} err="failed to get container status \"912fee84ff7be788e36e0e4b98600df90efc71ed3c577370b3755e175d81a43c\": rpc error: code = NotFound desc = could not find container \"912fee84ff7be788e36e0e4b98600df90efc71ed3c577370b3755e175d81a43c\": container with ID starting with 912fee84ff7be788e36e0e4b98600df90efc71ed3c577370b3755e175d81a43c not found: ID does not exist"
Nov 26 07:44:50 crc kubenswrapper[4909]: I1126 07:44:50.509504 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace92610-3282-430c-8a19-3efab25d27b1" path="/var/lib/kubelet/pods/ace92610-3282-430c-8a19-3efab25d27b1/volumes"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.150986 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"]
Nov 26 07:45:00 crc kubenswrapper[4909]: E1126 07:45:00.151949 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace92610-3282-430c-8a19-3efab25d27b1" containerName="registry-server"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.151965 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace92610-3282-430c-8a19-3efab25d27b1" containerName="registry-server"
Nov 26 07:45:00 crc kubenswrapper[4909]: E1126 07:45:00.151984 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace92610-3282-430c-8a19-3efab25d27b1" containerName="extract-utilities"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.151992 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace92610-3282-430c-8a19-3efab25d27b1" containerName="extract-utilities"
Nov 26 07:45:00 crc kubenswrapper[4909]: E1126 07:45:00.152020 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace92610-3282-430c-8a19-3efab25d27b1" containerName="extract-content"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.152031 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace92610-3282-430c-8a19-3efab25d27b1" containerName="extract-content"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.152215 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace92610-3282-430c-8a19-3efab25d27b1" containerName="registry-server"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.152882 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.160513 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.161505 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.162475 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"]
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.255726 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wcfc\" (UniqueName: \"kubernetes.io/projected/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-kube-api-access-6wcfc\") pod \"collect-profiles-29402385-kx5qc\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.255867 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-config-volume\") pod \"collect-profiles-29402385-kx5qc\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.255909 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-secret-volume\") pod \"collect-profiles-29402385-kx5qc\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.358647 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-config-volume\") pod \"collect-profiles-29402385-kx5qc\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.358766 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-secret-volume\") pod \"collect-profiles-29402385-kx5qc\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.358843 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wcfc\" (UniqueName: \"kubernetes.io/projected/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-kube-api-access-6wcfc\") pod \"collect-profiles-29402385-kx5qc\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.359994 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-config-volume\") pod \"collect-profiles-29402385-kx5qc\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.368783 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-secret-volume\") pod \"collect-profiles-29402385-kx5qc\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.387806 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wcfc\" (UniqueName: \"kubernetes.io/projected/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-kube-api-access-6wcfc\") pod \"collect-profiles-29402385-kx5qc\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.475000 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:00 crc kubenswrapper[4909]: I1126 07:45:00.903722 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"]
Nov 26 07:45:01 crc kubenswrapper[4909]: I1126 07:45:01.037176 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc" event={"ID":"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843","Type":"ContainerStarted","Data":"d5169245df90404a4ca904cb931533be13f87324f403e3bbb83f1dbb6f354052"}
Nov 26 07:45:02 crc kubenswrapper[4909]: I1126 07:45:02.047583 4909 generic.go:334] "Generic (PLEG): container finished" podID="7fe923a1-15bb-4e6d-bb3d-5eecfd55f843" containerID="3a8d6e779cb8f4e23370b18e7fb2a9c37bd5cdd17703f5439e9540db108a6783" exitCode=0
Nov 26 07:45:02 crc kubenswrapper[4909]: I1126 07:45:02.047656 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc" event={"ID":"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843","Type":"ContainerDied","Data":"3a8d6e779cb8f4e23370b18e7fb2a9c37bd5cdd17703f5439e9540db108a6783"}
Nov 26 07:45:03 crc kubenswrapper[4909]: I1126 07:45:03.323710 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:03 crc kubenswrapper[4909]: I1126 07:45:03.506480 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-secret-volume\") pod \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") "
Nov 26 07:45:03 crc kubenswrapper[4909]: I1126 07:45:03.506556 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wcfc\" (UniqueName: \"kubernetes.io/projected/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-kube-api-access-6wcfc\") pod \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") "
Nov 26 07:45:03 crc kubenswrapper[4909]: I1126 07:45:03.506622 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-config-volume\") pod \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\" (UID: \"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843\") "
Nov 26 07:45:03 crc kubenswrapper[4909]: I1126 07:45:03.507422 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-config-volume" (OuterVolumeSpecName: "config-volume") pod "7fe923a1-15bb-4e6d-bb3d-5eecfd55f843" (UID: "7fe923a1-15bb-4e6d-bb3d-5eecfd55f843"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 07:45:03 crc kubenswrapper[4909]: I1126 07:45:03.511426 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7fe923a1-15bb-4e6d-bb3d-5eecfd55f843" (UID: "7fe923a1-15bb-4e6d-bb3d-5eecfd55f843"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 07:45:03 crc kubenswrapper[4909]: I1126 07:45:03.511455 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-kube-api-access-6wcfc" (OuterVolumeSpecName: "kube-api-access-6wcfc") pod "7fe923a1-15bb-4e6d-bb3d-5eecfd55f843" (UID: "7fe923a1-15bb-4e6d-bb3d-5eecfd55f843"). InnerVolumeSpecName "kube-api-access-6wcfc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:45:03 crc kubenswrapper[4909]: I1126 07:45:03.608574 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 26 07:45:03 crc kubenswrapper[4909]: I1126 07:45:03.608637 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wcfc\" (UniqueName: \"kubernetes.io/projected/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-kube-api-access-6wcfc\") on node \"crc\" DevicePath \"\""
Nov 26 07:45:03 crc kubenswrapper[4909]: I1126 07:45:03.608651 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843-config-volume\") on node \"crc\" DevicePath \"\""
Nov 26 07:45:04 crc kubenswrapper[4909]: I1126 07:45:04.064299 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc" event={"ID":"7fe923a1-15bb-4e6d-bb3d-5eecfd55f843","Type":"ContainerDied","Data":"d5169245df90404a4ca904cb931533be13f87324f403e3bbb83f1dbb6f354052"}
Nov 26 07:45:04 crc kubenswrapper[4909]: I1126 07:45:04.064372 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5169245df90404a4ca904cb931533be13f87324f403e3bbb83f1dbb6f354052"
Nov 26 07:45:04 crc kubenswrapper[4909]: I1126 07:45:04.064373 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"
Nov 26 07:45:04 crc kubenswrapper[4909]: I1126 07:45:04.403963 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7"]
Nov 26 07:45:04 crc kubenswrapper[4909]: I1126 07:45:04.412906 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402340-xfxg7"]
Nov 26 07:45:04 crc kubenswrapper[4909]: I1126 07:45:04.508665 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9" path="/var/lib/kubelet/pods/adf9b4b3-2b3d-47bc-ace7-03ba1cc10cb9/volumes"
Nov 26 07:45:07 crc kubenswrapper[4909]: I1126 07:45:07.300676 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 07:45:07 crc kubenswrapper[4909]: I1126 07:45:07.301015 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 07:45:35 crc kubenswrapper[4909]: I1126 07:45:35.827684 4909 scope.go:117] "RemoveContainer" containerID="a882b1203af71bfb2ec37035497eae2b02c4970319eea5ad4c4e0321d8ead8ba"
Nov 26 07:45:37 crc kubenswrapper[4909]: I1126 07:45:37.301427 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 07:45:37 crc kubenswrapper[4909]: I1126 07:45:37.301771 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 07:46:03 crc kubenswrapper[4909]: I1126 07:46:03.894639 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prbzs"]
Nov 26 07:46:03 crc kubenswrapper[4909]: E1126 07:46:03.896798 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe923a1-15bb-4e6d-bb3d-5eecfd55f843" containerName="collect-profiles"
Nov 26 07:46:03 crc kubenswrapper[4909]: I1126 07:46:03.896927 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe923a1-15bb-4e6d-bb3d-5eecfd55f843" containerName="collect-profiles"
Nov 26 07:46:03 crc kubenswrapper[4909]: I1126 07:46:03.897207 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe923a1-15bb-4e6d-bb3d-5eecfd55f843" containerName="collect-profiles"
Nov 26 07:46:03 crc kubenswrapper[4909]: I1126 07:46:03.898741 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:03 crc kubenswrapper[4909]: I1126 07:46:03.905154 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-catalog-content\") pod \"certified-operators-prbzs\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") " pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:03 crc kubenswrapper[4909]: I1126 07:46:03.905284 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4r4\" (UniqueName: \"kubernetes.io/projected/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-kube-api-access-jt4r4\") pod \"certified-operators-prbzs\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") " pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:03 crc kubenswrapper[4909]: I1126 07:46:03.905337 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-utilities\") pod \"certified-operators-prbzs\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") " pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:03 crc kubenswrapper[4909]: I1126 07:46:03.920084 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prbzs"]
Nov 26 07:46:04 crc kubenswrapper[4909]: I1126 07:46:04.006687 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-catalog-content\") pod \"certified-operators-prbzs\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") " pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:04 crc kubenswrapper[4909]: I1126 07:46:04.006763 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt4r4\" (UniqueName: \"kubernetes.io/projected/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-kube-api-access-jt4r4\") pod \"certified-operators-prbzs\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") " pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:04 crc kubenswrapper[4909]: I1126 07:46:04.006791 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-utilities\") pod \"certified-operators-prbzs\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") " pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:04 crc kubenswrapper[4909]: I1126 07:46:04.007340 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-catalog-content\") pod \"certified-operators-prbzs\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") " pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:04 crc kubenswrapper[4909]: I1126 07:46:04.007410 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-utilities\") pod \"certified-operators-prbzs\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") " pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:04 crc kubenswrapper[4909]: I1126 07:46:04.035475 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt4r4\" (UniqueName: \"kubernetes.io/projected/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-kube-api-access-jt4r4\") pod \"certified-operators-prbzs\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") " pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:04 crc kubenswrapper[4909]: I1126 07:46:04.232500 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:04 crc kubenswrapper[4909]: I1126 07:46:04.706697 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prbzs"]
Nov 26 07:46:05 crc kubenswrapper[4909]: I1126 07:46:05.634634 4909 generic.go:334] "Generic (PLEG): container finished" podID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" containerID="9a1ec14093e548984f140fdb3b0a13f87a5eaad842770c5d5736b3ecb02925bd" exitCode=0
Nov 26 07:46:05 crc kubenswrapper[4909]: I1126 07:46:05.634696 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prbzs" event={"ID":"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2","Type":"ContainerDied","Data":"9a1ec14093e548984f140fdb3b0a13f87a5eaad842770c5d5736b3ecb02925bd"}
Nov 26 07:46:05 crc kubenswrapper[4909]: I1126 07:46:05.634734 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prbzs" event={"ID":"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2","Type":"ContainerStarted","Data":"6c010b42f191d8c002ff9c3daa0a1609714ab9503bf960a585abba7e34ca31e6"}
Nov 26 07:46:07 crc kubenswrapper[4909]: I1126 07:46:07.301097 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 07:46:07 crc kubenswrapper[4909]: I1126 07:46:07.301164 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 07:46:07 crc kubenswrapper[4909]: I1126 07:46:07.301214 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv"
Nov 26 07:46:07 crc kubenswrapper[4909]: I1126 07:46:07.302353 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa2dc649f26bd2e1c0e531ea6c4de05ff32b5c2300a83eefa8dd6a1922d306b6"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 26 07:46:07 crc kubenswrapper[4909]: I1126 07:46:07.302415 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://aa2dc649f26bd2e1c0e531ea6c4de05ff32b5c2300a83eefa8dd6a1922d306b6" gracePeriod=600
Nov 26 07:46:07 crc kubenswrapper[4909]: I1126 07:46:07.656644 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="aa2dc649f26bd2e1c0e531ea6c4de05ff32b5c2300a83eefa8dd6a1922d306b6" exitCode=0
Nov 26 07:46:07 crc kubenswrapper[4909]: I1126 07:46:07.656694 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"aa2dc649f26bd2e1c0e531ea6c4de05ff32b5c2300a83eefa8dd6a1922d306b6"}
Nov 26 07:46:07 crc kubenswrapper[4909]: I1126 07:46:07.657115 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5"}
Nov 26 07:46:07 crc kubenswrapper[4909]: I1126 07:46:07.657176 4909 scope.go:117] "RemoveContainer" containerID="400f6cc6f0b14929ca0c4e7c4a54e211b4096a1b22e16e72be110c88d478dd52"
Nov 26 07:46:07 crc kubenswrapper[4909]: I1126 07:46:07.661135 4909 generic.go:334] "Generic (PLEG): container finished" podID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" containerID="1e827a023a9d98f8dd7612387bdd1edf3db141891f76f3b0d6164639cb18ca58" exitCode=0
Nov 26 07:46:07 crc kubenswrapper[4909]: I1126 07:46:07.661233 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prbzs" event={"ID":"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2","Type":"ContainerDied","Data":"1e827a023a9d98f8dd7612387bdd1edf3db141891f76f3b0d6164639cb18ca58"}
Nov 26 07:46:08 crc kubenswrapper[4909]: I1126 07:46:08.674079 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prbzs" event={"ID":"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2","Type":"ContainerStarted","Data":"b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4"}
Nov 26 07:46:08 crc kubenswrapper[4909]: I1126 07:46:08.696619 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prbzs" podStartSLOduration=3.2097897619999998 podStartE2EDuration="5.696600706s" podCreationTimestamp="2025-11-26 07:46:03 +0000 UTC" firstStartedPulling="2025-11-26 07:46:05.636692675 +0000 UTC m=+2737.782903881" lastFinishedPulling="2025-11-26 07:46:08.123503659 +0000 UTC m=+2740.269714825" observedRunningTime="2025-11-26 07:46:08.691361284 +0000 UTC m=+2740.837572460" watchObservedRunningTime="2025-11-26 07:46:08.696600706 +0000 UTC m=+2740.842811872"
Nov 26 07:46:14 crc kubenswrapper[4909]: I1126 07:46:14.233119 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:14 crc kubenswrapper[4909]: I1126 07:46:14.233718 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:14 crc kubenswrapper[4909]: I1126 07:46:14.299427 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:14 crc kubenswrapper[4909]: I1126 07:46:14.792703 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:14 crc kubenswrapper[4909]: I1126 07:46:14.869110 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prbzs"]
Nov 26 07:46:16 crc kubenswrapper[4909]: I1126 07:46:16.742869 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-prbzs" podUID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" containerName="registry-server" containerID="cri-o://b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4" gracePeriod=2
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.136838 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.300955 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-utilities\") pod \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") "
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.301115 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt4r4\" (UniqueName: \"kubernetes.io/projected/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-kube-api-access-jt4r4\") pod \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") "
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.301298 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-catalog-content\") pod \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\" (UID: \"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2\") "
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.302012 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-utilities" (OuterVolumeSpecName: "utilities") pod "1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" (UID: "1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.311905 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-kube-api-access-jt4r4" (OuterVolumeSpecName: "kube-api-access-jt4r4") pod "1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" (UID: "1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2"). InnerVolumeSpecName "kube-api-access-jt4r4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.403075 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.403136 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt4r4\" (UniqueName: \"kubernetes.io/projected/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-kube-api-access-jt4r4\") on node \"crc\" DevicePath \"\""
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.593248 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" (UID: "1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.607022 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.757686 4909 generic.go:334] "Generic (PLEG): container finished" podID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" containerID="b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4" exitCode=0
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.757749 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prbzs" event={"ID":"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2","Type":"ContainerDied","Data":"b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4"}
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.757788 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prbzs" event={"ID":"1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2","Type":"ContainerDied","Data":"6c010b42f191d8c002ff9c3daa0a1609714ab9503bf960a585abba7e34ca31e6"}
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.757820 4909 scope.go:117] "RemoveContainer" containerID="b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4"
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.757854 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prbzs"
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.793092 4909 scope.go:117] "RemoveContainer" containerID="1e827a023a9d98f8dd7612387bdd1edf3db141891f76f3b0d6164639cb18ca58"
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.810232 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prbzs"]
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.816808 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prbzs"]
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.846985 4909 scope.go:117] "RemoveContainer" containerID="9a1ec14093e548984f140fdb3b0a13f87a5eaad842770c5d5736b3ecb02925bd"
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.875050 4909 scope.go:117] "RemoveContainer" containerID="b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4"
Nov 26 07:46:17 crc kubenswrapper[4909]: E1126 07:46:17.875568 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4\": container with ID starting with b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4 not found: ID does not exist" containerID="b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4"
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.875649 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4"} err="failed to get container status \"b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4\": rpc error: code = NotFound desc = could not find container \"b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4\": container with ID starting with b6eb7274f49738a112ed0d2686e7b2a4d1bff514aadcda6cea4b4508032221e4 not found: ID does not exist"
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.875672 4909 scope.go:117] "RemoveContainer" containerID="1e827a023a9d98f8dd7612387bdd1edf3db141891f76f3b0d6164639cb18ca58"
Nov 26 07:46:17 crc kubenswrapper[4909]: E1126 07:46:17.876036 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e827a023a9d98f8dd7612387bdd1edf3db141891f76f3b0d6164639cb18ca58\": container with ID starting with 1e827a023a9d98f8dd7612387bdd1edf3db141891f76f3b0d6164639cb18ca58 not found: ID does not exist" containerID="1e827a023a9d98f8dd7612387bdd1edf3db141891f76f3b0d6164639cb18ca58"
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.876055 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e827a023a9d98f8dd7612387bdd1edf3db141891f76f3b0d6164639cb18ca58"} err="failed to get container status \"1e827a023a9d98f8dd7612387bdd1edf3db141891f76f3b0d6164639cb18ca58\": rpc error: code = NotFound desc = could not find container \"1e827a023a9d98f8dd7612387bdd1edf3db141891f76f3b0d6164639cb18ca58\": container with ID starting with 1e827a023a9d98f8dd7612387bdd1edf3db141891f76f3b0d6164639cb18ca58 not found: ID does not exist"
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.876067 4909 scope.go:117] "RemoveContainer" containerID="9a1ec14093e548984f140fdb3b0a13f87a5eaad842770c5d5736b3ecb02925bd"
Nov 26 07:46:17 crc kubenswrapper[4909]: E1126 07:46:17.876388 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a1ec14093e548984f140fdb3b0a13f87a5eaad842770c5d5736b3ecb02925bd\": container with ID starting with 9a1ec14093e548984f140fdb3b0a13f87a5eaad842770c5d5736b3ecb02925bd not found: ID does not exist" containerID="9a1ec14093e548984f140fdb3b0a13f87a5eaad842770c5d5736b3ecb02925bd"
Nov 26 07:46:17 crc kubenswrapper[4909]: I1126 07:46:17.876431 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a1ec14093e548984f140fdb3b0a13f87a5eaad842770c5d5736b3ecb02925bd"} err="failed to get container status \"9a1ec14093e548984f140fdb3b0a13f87a5eaad842770c5d5736b3ecb02925bd\": rpc error: code = NotFound desc = could not find container \"9a1ec14093e548984f140fdb3b0a13f87a5eaad842770c5d5736b3ecb02925bd\": container with ID starting with 9a1ec14093e548984f140fdb3b0a13f87a5eaad842770c5d5736b3ecb02925bd not found: ID does not exist"
Nov 26 07:46:18 crc kubenswrapper[4909]: I1126 07:46:18.511736 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" path="/var/lib/kubelet/pods/1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2/volumes"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.430586 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gqhn8"]
Nov 26 07:46:20 crc kubenswrapper[4909]: E1126 07:46:20.431227 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" containerName="extract-utilities"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.431243 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" containerName="extract-utilities"
Nov 26 07:46:20 crc kubenswrapper[4909]: E1126 07:46:20.431264 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" containerName="extract-content"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.431273 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" containerName="extract-content"
Nov 26 07:46:20 crc kubenswrapper[4909]: E1126 07:46:20.431313 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" containerName="registry-server"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.431322 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" containerName="registry-server"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.431529 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7cd8d4-d556-4e2c-a8ca-c11ea9997fb2" containerName="registry-server"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.440858 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.469890 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqhn8"]
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.551897 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-utilities\") pod \"redhat-operators-gqhn8\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") " pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.552018 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-catalog-content\") pod \"redhat-operators-gqhn8\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") " pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.552161 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt8lv\" (UniqueName: \"kubernetes.io/projected/dd253d60-79b5-469e-ab93-20a5c989ac35-kube-api-access-qt8lv\") pod \"redhat-operators-gqhn8\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") " pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.653326 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-catalog-content\") pod \"redhat-operators-gqhn8\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") " pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.653407 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt8lv\" (UniqueName: \"kubernetes.io/projected/dd253d60-79b5-469e-ab93-20a5c989ac35-kube-api-access-qt8lv\") pod \"redhat-operators-gqhn8\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") " pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.653514 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-utilities\") pod \"redhat-operators-gqhn8\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") " pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.653936 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-catalog-content\") pod \"redhat-operators-gqhn8\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") " pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.653986 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-utilities\") pod \"redhat-operators-gqhn8\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") " pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.680540 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt8lv\" (UniqueName: \"kubernetes.io/projected/dd253d60-79b5-469e-ab93-20a5c989ac35-kube-api-access-qt8lv\") pod \"redhat-operators-gqhn8\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") " pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:20 crc kubenswrapper[4909]: I1126 07:46:20.781494 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:21 crc kubenswrapper[4909]: I1126 07:46:21.202109 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqhn8"]
Nov 26 07:46:21 crc kubenswrapper[4909]: W1126 07:46:21.205948 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd253d60_79b5_469e_ab93_20a5c989ac35.slice/crio-7c65449f8ceb8dc1ab609c2cde853602590b04cc992c796cabc02c0711537592 WatchSource:0}: Error finding container 7c65449f8ceb8dc1ab609c2cde853602590b04cc992c796cabc02c0711537592: Status 404 returned error can't find the container with id 7c65449f8ceb8dc1ab609c2cde853602590b04cc992c796cabc02c0711537592
Nov 26 07:46:21 crc kubenswrapper[4909]: I1126 07:46:21.793459 4909 generic.go:334] "Generic (PLEG): container finished" podID="dd253d60-79b5-469e-ab93-20a5c989ac35" containerID="dda3bfa4b840daecf96e3b063a3ff67fdd3f115f51ead12302388e8818308341" exitCode=0
Nov 26 07:46:21 crc kubenswrapper[4909]: I1126 07:46:21.793515 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqhn8" event={"ID":"dd253d60-79b5-469e-ab93-20a5c989ac35","Type":"ContainerDied","Data":"dda3bfa4b840daecf96e3b063a3ff67fdd3f115f51ead12302388e8818308341"}
Nov 26 07:46:21 crc kubenswrapper[4909]: I1126 07:46:21.793780 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqhn8" event={"ID":"dd253d60-79b5-469e-ab93-20a5c989ac35","Type":"ContainerStarted","Data":"7c65449f8ceb8dc1ab609c2cde853602590b04cc992c796cabc02c0711537592"}
Nov 26 07:46:22 crc kubenswrapper[4909]: I1126 07:46:22.802316 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqhn8" event={"ID":"dd253d60-79b5-469e-ab93-20a5c989ac35","Type":"ContainerStarted","Data":"578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3"}
Nov 26 07:46:23 crc kubenswrapper[4909]: I1126 07:46:23.813200 4909 generic.go:334] "Generic (PLEG): container finished" podID="dd253d60-79b5-469e-ab93-20a5c989ac35" containerID="578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3" exitCode=0
Nov 26 07:46:23 crc kubenswrapper[4909]: I1126 07:46:23.813255 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqhn8" event={"ID":"dd253d60-79b5-469e-ab93-20a5c989ac35","Type":"ContainerDied","Data":"578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3"}
Nov 26 07:46:24 crc kubenswrapper[4909]: I1126 07:46:24.822264 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqhn8" event={"ID":"dd253d60-79b5-469e-ab93-20a5c989ac35","Type":"ContainerStarted","Data":"0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2"}
Nov 26 07:46:30 crc kubenswrapper[4909]: I1126 07:46:30.781875 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:30 crc kubenswrapper[4909]: I1126 07:46:30.782560 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:30 crc kubenswrapper[4909]: I1126 07:46:30.859499 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:30 crc kubenswrapper[4909]: I1126 07:46:30.888251 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gqhn8" podStartSLOduration=8.213351304 podStartE2EDuration="10.88823175s" podCreationTimestamp="2025-11-26 07:46:20 +0000 UTC" firstStartedPulling="2025-11-26 07:46:21.794852556 +0000 UTC m=+2753.941063722" lastFinishedPulling="2025-11-26 07:46:24.469732992 +0000 UTC m=+2756.615944168" observedRunningTime="2025-11-26 07:46:24.840004966 +0000 UTC m=+2756.986216132" watchObservedRunningTime="2025-11-26 07:46:30.88823175 +0000 UTC m=+2763.034442926"
Nov 26 07:46:30 crc kubenswrapper[4909]: I1126 07:46:30.920701 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:31 crc kubenswrapper[4909]: I1126 07:46:31.096963 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqhn8"]
Nov 26 07:46:32 crc kubenswrapper[4909]: I1126 07:46:32.896463 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gqhn8" podUID="dd253d60-79b5-469e-ab93-20a5c989ac35" containerName="registry-server" containerID="cri-o://0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2" gracePeriod=2
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.309977 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.447209 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt8lv\" (UniqueName: \"kubernetes.io/projected/dd253d60-79b5-469e-ab93-20a5c989ac35-kube-api-access-qt8lv\") pod \"dd253d60-79b5-469e-ab93-20a5c989ac35\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") "
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.447263 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-utilities\") pod \"dd253d60-79b5-469e-ab93-20a5c989ac35\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") "
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.447395 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-catalog-content\") pod \"dd253d60-79b5-469e-ab93-20a5c989ac35\" (UID: \"dd253d60-79b5-469e-ab93-20a5c989ac35\") "
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.448801 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-utilities" (OuterVolumeSpecName: "utilities") pod "dd253d60-79b5-469e-ab93-20a5c989ac35" (UID: "dd253d60-79b5-469e-ab93-20a5c989ac35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.455369 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd253d60-79b5-469e-ab93-20a5c989ac35-kube-api-access-qt8lv" (OuterVolumeSpecName: "kube-api-access-qt8lv") pod "dd253d60-79b5-469e-ab93-20a5c989ac35" (UID: "dd253d60-79b5-469e-ab93-20a5c989ac35"). InnerVolumeSpecName "kube-api-access-qt8lv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.548579 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt8lv\" (UniqueName: \"kubernetes.io/projected/dd253d60-79b5-469e-ab93-20a5c989ac35-kube-api-access-qt8lv\") on node \"crc\" DevicePath \"\""
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.548685 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.914182 4909 generic.go:334] "Generic (PLEG): container finished" podID="dd253d60-79b5-469e-ab93-20a5c989ac35" containerID="0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2" exitCode=0
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.914231 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqhn8"
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.914240 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqhn8" event={"ID":"dd253d60-79b5-469e-ab93-20a5c989ac35","Type":"ContainerDied","Data":"0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2"}
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.914454 4909 scope.go:117] "RemoveContainer" containerID="0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2"
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.914741 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqhn8" event={"ID":"dd253d60-79b5-469e-ab93-20a5c989ac35","Type":"ContainerDied","Data":"7c65449f8ceb8dc1ab609c2cde853602590b04cc992c796cabc02c0711537592"}
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.936984 4909 scope.go:117] "RemoveContainer" containerID="578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3"
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.955301 4909 scope.go:117] "RemoveContainer" containerID="dda3bfa4b840daecf96e3b063a3ff67fdd3f115f51ead12302388e8818308341"
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.991976 4909 scope.go:117] "RemoveContainer" containerID="0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2"
Nov 26 07:46:33 crc kubenswrapper[4909]: E1126 07:46:33.992543 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2\": container with ID starting with 0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2 not found: ID does not exist" containerID="0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2"
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.992626 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2"} err="failed to get container status \"0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2\": rpc error: code = NotFound desc = could not find container \"0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2\": container with ID starting with 0592b43959615b84f1624c193e0484a6681e493cb45e25689d062850322775a2 not found: ID does not exist"
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.992655 4909 scope.go:117] "RemoveContainer" containerID="578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3"
Nov 26 07:46:33 crc kubenswrapper[4909]: E1126 07:46:33.993053 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3\": container with ID starting with 578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3 not found: ID does not exist" containerID="578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3"
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.993079 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3"} err="failed to get container status \"578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3\": rpc error: code = NotFound desc = could not find container \"578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3\": container with ID starting with 578746a41f9a312dad8ae5cb809c2f06249ce3050229dc4fc9f866b474724ea3 not found: ID does not exist"
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.993103 4909 scope.go:117] "RemoveContainer" containerID="dda3bfa4b840daecf96e3b063a3ff67fdd3f115f51ead12302388e8818308341"
Nov 26 07:46:33 crc kubenswrapper[4909]: E1126 07:46:33.993293 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dda3bfa4b840daecf96e3b063a3ff67fdd3f115f51ead12302388e8818308341\": container with ID starting with dda3bfa4b840daecf96e3b063a3ff67fdd3f115f51ead12302388e8818308341 not found: ID does not exist" containerID="dda3bfa4b840daecf96e3b063a3ff67fdd3f115f51ead12302388e8818308341"
Nov 26 07:46:33 crc kubenswrapper[4909]: I1126 07:46:33.993322 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda3bfa4b840daecf96e3b063a3ff67fdd3f115f51ead12302388e8818308341"} err="failed to get container status \"dda3bfa4b840daecf96e3b063a3ff67fdd3f115f51ead12302388e8818308341\": rpc error: code = NotFound desc = could not find container \"dda3bfa4b840daecf96e3b063a3ff67fdd3f115f51ead12302388e8818308341\": container with ID starting with dda3bfa4b840daecf96e3b063a3ff67fdd3f115f51ead12302388e8818308341 not found: ID does not exist"
Nov 26 07:46:34 crc kubenswrapper[4909]: I1126 07:46:34.773303 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd253d60-79b5-469e-ab93-20a5c989ac35" (UID: "dd253d60-79b5-469e-ab93-20a5c989ac35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 07:46:34 crc kubenswrapper[4909]: I1126 07:46:34.852535 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqhn8"]
Nov 26 07:46:34 crc kubenswrapper[4909]: I1126 07:46:34.859078 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gqhn8"]
Nov 26 07:46:34 crc kubenswrapper[4909]: I1126 07:46:34.868761 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd253d60-79b5-469e-ab93-20a5c989ac35-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 07:46:36 crc kubenswrapper[4909]: I1126 07:46:36.510789 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd253d60-79b5-469e-ab93-20a5c989ac35" path="/var/lib/kubelet/pods/dd253d60-79b5-469e-ab93-20a5c989ac35/volumes"
Nov 26 07:48:07 crc kubenswrapper[4909]: I1126 07:48:07.301029 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 07:48:07 crc kubenswrapper[4909]: I1126 07:48:07.301615 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 07:48:37 crc kubenswrapper[4909]: I1126 07:48:37.301275 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 07:48:37 crc kubenswrapper[4909]: I1126 07:48:37.301890 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 07:49:07 crc kubenswrapper[4909]: I1126 07:49:07.301108 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 07:49:07 crc kubenswrapper[4909]: I1126 07:49:07.301687 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 07:49:07 crc kubenswrapper[4909]: I1126 07:49:07.301741 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv"
Nov 26 07:49:07 crc kubenswrapper[4909]: I1126 07:49:07.302399 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon"
containerStatusID={"Type":"cri-o","ID":"dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 07:49:07 crc kubenswrapper[4909]: I1126 07:49:07.302488 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" gracePeriod=600 Nov 26 07:49:07 crc kubenswrapper[4909]: E1126 07:49:07.441562 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:49:08 crc kubenswrapper[4909]: I1126 07:49:08.204825 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" exitCode=0 Nov 26 07:49:08 crc kubenswrapper[4909]: I1126 07:49:08.204882 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5"} Nov 26 07:49:08 crc kubenswrapper[4909]: I1126 07:49:08.204921 4909 scope.go:117] "RemoveContainer" containerID="aa2dc649f26bd2e1c0e531ea6c4de05ff32b5c2300a83eefa8dd6a1922d306b6" Nov 26 07:49:08 crc kubenswrapper[4909]: I1126 07:49:08.205507 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:49:08 crc kubenswrapper[4909]: E1126 07:49:08.205895 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:49:21 crc kubenswrapper[4909]: I1126 07:49:21.499105 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:49:21 crc kubenswrapper[4909]: E1126 07:49:21.499876 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:49:34 crc kubenswrapper[4909]: I1126 07:49:34.499233 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:49:34 crc kubenswrapper[4909]: E1126 07:49:34.499964 4909 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:49:45 crc kubenswrapper[4909]: I1126 07:49:45.499269 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:49:45 crc kubenswrapper[4909]: E1126 07:49:45.499937 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:50:00 crc kubenswrapper[4909]: I1126 07:50:00.499107 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:50:00 crc kubenswrapper[4909]: E1126 07:50:00.499983 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:50:14 crc kubenswrapper[4909]: I1126 07:50:14.499525 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:50:14 crc kubenswrapper[4909]: E1126 07:50:14.500742 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:50:29 crc kubenswrapper[4909]: I1126 07:50:29.500070 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:50:29 crc kubenswrapper[4909]: E1126 07:50:29.500848 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:50:40 crc kubenswrapper[4909]: I1126 07:50:40.498631 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:50:40 crc kubenswrapper[4909]: E1126 07:50:40.499677 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:50:53 crc kubenswrapper[4909]: I1126 07:50:53.500715 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:50:53 crc kubenswrapper[4909]: E1126 07:50:53.502549 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:51:06 crc kubenswrapper[4909]: I1126 07:51:06.499378 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:51:06 crc kubenswrapper[4909]: E1126 07:51:06.499911 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:51:18 crc kubenswrapper[4909]: I1126 07:51:18.502947 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:51:18 crc kubenswrapper[4909]: E1126 07:51:18.503449 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:51:33 crc kubenswrapper[4909]: I1126 07:51:33.498802 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:51:33 crc kubenswrapper[4909]: E1126 07:51:33.500440 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:51:45 crc kubenswrapper[4909]: I1126 07:51:45.499551 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:51:45 crc kubenswrapper[4909]: E1126 07:51:45.500292 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" 
podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:51:59 crc kubenswrapper[4909]: I1126 07:51:59.499079 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:51:59 crc kubenswrapper[4909]: E1126 07:51:59.499963 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:52:11 crc kubenswrapper[4909]: I1126 07:52:11.499110 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:52:11 crc kubenswrapper[4909]: E1126 07:52:11.500039 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:52:24 crc kubenswrapper[4909]: I1126 07:52:24.498439 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:52:24 crc kubenswrapper[4909]: E1126 07:52:24.499183 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:52:37 crc kubenswrapper[4909]: I1126 07:52:37.499472 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:52:37 crc kubenswrapper[4909]: E1126 07:52:37.500466 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:52:49 crc kubenswrapper[4909]: I1126 07:52:49.499046 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:52:49 crc kubenswrapper[4909]: E1126 07:52:49.500003 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:53:02 crc kubenswrapper[4909]: I1126 07:53:02.499997 4909 scope.go:117] "RemoveContainer" 
containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:53:02 crc kubenswrapper[4909]: E1126 07:53:02.501067 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:53:14 crc kubenswrapper[4909]: I1126 07:53:14.500252 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:53:14 crc kubenswrapper[4909]: E1126 07:53:14.502380 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:53:28 crc kubenswrapper[4909]: I1126 07:53:28.503103 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:53:28 crc kubenswrapper[4909]: E1126 07:53:28.503956 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:53:40 crc kubenswrapper[4909]: I1126 07:53:40.498961 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:53:40 crc kubenswrapper[4909]: E1126 07:53:40.500871 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:53:53 crc kubenswrapper[4909]: I1126 07:53:53.499456 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:53:53 crc kubenswrapper[4909]: E1126 07:53:53.500333 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 07:54:07 crc kubenswrapper[4909]: I1126 07:54:07.498945 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:54:08 crc kubenswrapper[4909]: I1126 07:54:08.447578 4909 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"0077080d92dede8186cc2713354944a03a3d849e9c3d67ed508f9cb7a8cc9449"} Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.215462 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bcl8s"] Nov 26 07:55:28 crc kubenswrapper[4909]: E1126 07:55:28.216377 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd253d60-79b5-469e-ab93-20a5c989ac35" containerName="extract-content" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.216392 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd253d60-79b5-469e-ab93-20a5c989ac35" containerName="extract-content" Nov 26 07:55:28 crc kubenswrapper[4909]: E1126 07:55:28.216420 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd253d60-79b5-469e-ab93-20a5c989ac35" containerName="registry-server" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.216427 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd253d60-79b5-469e-ab93-20a5c989ac35" containerName="registry-server" Nov 26 07:55:28 crc kubenswrapper[4909]: E1126 07:55:28.216449 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd253d60-79b5-469e-ab93-20a5c989ac35" containerName="extract-utilities" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.216459 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd253d60-79b5-469e-ab93-20a5c989ac35" containerName="extract-utilities" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.216679 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd253d60-79b5-469e-ab93-20a5c989ac35" containerName="registry-server" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.217935 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.222201 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcl8s"] Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.382007 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-catalog-content\") pod \"redhat-marketplace-bcl8s\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.382079 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-utilities\") pod \"redhat-marketplace-bcl8s\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.382182 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqv8d\" (UniqueName: \"kubernetes.io/projected/e9d50f89-afc6-49c9-8f21-ab80df66f719-kube-api-access-zqv8d\") pod \"redhat-marketplace-bcl8s\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.483123 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqv8d\" (UniqueName: \"kubernetes.io/projected/e9d50f89-afc6-49c9-8f21-ab80df66f719-kube-api-access-zqv8d\") pod \"redhat-marketplace-bcl8s\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.483205 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-catalog-content\") pod \"redhat-marketplace-bcl8s\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.483241 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-utilities\") pod \"redhat-marketplace-bcl8s\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.483712 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-catalog-content\") pod \"redhat-marketplace-bcl8s\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.483797 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-utilities\") pod \"redhat-marketplace-bcl8s\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.507699 4909 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zqv8d\" (UniqueName: \"kubernetes.io/projected/e9d50f89-afc6-49c9-8f21-ab80df66f719-kube-api-access-zqv8d\") pod \"redhat-marketplace-bcl8s\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.589781 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:28 crc kubenswrapper[4909]: I1126 07:55:28.802902 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcl8s"] Nov 26 07:55:29 crc kubenswrapper[4909]: I1126 07:55:29.090303 4909 generic.go:334] "Generic (PLEG): container finished" podID="e9d50f89-afc6-49c9-8f21-ab80df66f719" containerID="a7df5bd2154b271f74c211c76bf7daead8bb92d9f887e8ea0f0795806b9e87dc" exitCode=0 Nov 26 07:55:29 crc kubenswrapper[4909]: I1126 07:55:29.090345 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcl8s" event={"ID":"e9d50f89-afc6-49c9-8f21-ab80df66f719","Type":"ContainerDied","Data":"a7df5bd2154b271f74c211c76bf7daead8bb92d9f887e8ea0f0795806b9e87dc"} Nov 26 07:55:29 crc kubenswrapper[4909]: I1126 07:55:29.090368 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcl8s" event={"ID":"e9d50f89-afc6-49c9-8f21-ab80df66f719","Type":"ContainerStarted","Data":"7c1a09038943ecd12c6df7fdb4456dd6910a9218a0d431b84743e8984ff1bc77"} Nov 26 07:55:30 crc kubenswrapper[4909]: I1126 07:55:30.098892 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 07:55:31 crc kubenswrapper[4909]: I1126 07:55:31.103321 4909 generic.go:334] "Generic (PLEG): container finished" podID="e9d50f89-afc6-49c9-8f21-ab80df66f719" containerID="b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f" exitCode=0 Nov 26 07:55:31 crc kubenswrapper[4909]: I1126 07:55:31.103400 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcl8s" event={"ID":"e9d50f89-afc6-49c9-8f21-ab80df66f719","Type":"ContainerDied","Data":"b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f"} Nov 26 07:55:31 crc kubenswrapper[4909]: E1126 07:55:31.132334 4909 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9d50f89_afc6_49c9_8f21_ab80df66f719.slice/crio-b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f.scope\": RecentStats: unable to find data in memory cache]" Nov 26 07:55:32 crc kubenswrapper[4909]: I1126 07:55:32.112470 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcl8s" event={"ID":"e9d50f89-afc6-49c9-8f21-ab80df66f719","Type":"ContainerStarted","Data":"86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593"} Nov 26 07:55:32 crc kubenswrapper[4909]: I1126 07:55:32.129851 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bcl8s" podStartSLOduration=2.68154351 podStartE2EDuration="4.129833781s" podCreationTimestamp="2025-11-26 07:55:28 +0000 UTC" firstStartedPulling="2025-11-26 07:55:30.09866971 +0000 UTC m=+3302.244880876" lastFinishedPulling="2025-11-26 07:55:31.546959981 +0000 UTC m=+3303.693171147" observedRunningTime="2025-11-26 07:55:32.12942634 +0000 UTC 
m=+3304.275637506" watchObservedRunningTime="2025-11-26 07:55:32.129833781 +0000 UTC m=+3304.276044947" Nov 26 07:55:38 crc kubenswrapper[4909]: I1126 07:55:38.590341 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:38 crc kubenswrapper[4909]: I1126 07:55:38.590851 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:38 crc kubenswrapper[4909]: I1126 07:55:38.657328 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:39 crc kubenswrapper[4909]: I1126 07:55:39.209582 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:39 crc kubenswrapper[4909]: I1126 07:55:39.256814 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcl8s"] Nov 26 07:55:41 crc kubenswrapper[4909]: I1126 07:55:41.179879 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bcl8s" podUID="e9d50f89-afc6-49c9-8f21-ab80df66f719" containerName="registry-server" containerID="cri-o://86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593" gracePeriod=2 Nov 26 07:55:41 crc kubenswrapper[4909]: E1126 07:55:41.397411 4909 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9d50f89_afc6_49c9_8f21_ab80df66f719.slice/crio-conmon-86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9d50f89_afc6_49c9_8f21_ab80df66f719.slice/crio-86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593.scope\": RecentStats: unable to find data in memory cache]" Nov 26 07:55:41 crc kubenswrapper[4909]: I1126 07:55:41.567455 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:41 crc kubenswrapper[4909]: I1126 07:55:41.689267 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-catalog-content\") pod \"e9d50f89-afc6-49c9-8f21-ab80df66f719\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " Nov 26 07:55:41 crc kubenswrapper[4909]: I1126 07:55:41.689337 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqv8d\" (UniqueName: \"kubernetes.io/projected/e9d50f89-afc6-49c9-8f21-ab80df66f719-kube-api-access-zqv8d\") pod \"e9d50f89-afc6-49c9-8f21-ab80df66f719\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " Nov 26 07:55:41 crc kubenswrapper[4909]: I1126 07:55:41.689377 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-utilities\") pod \"e9d50f89-afc6-49c9-8f21-ab80df66f719\" (UID: \"e9d50f89-afc6-49c9-8f21-ab80df66f719\") " Nov 26 07:55:41 crc kubenswrapper[4909]: I1126 07:55:41.690814 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-utilities" (OuterVolumeSpecName: "utilities") pod "e9d50f89-afc6-49c9-8f21-ab80df66f719" (UID: "e9d50f89-afc6-49c9-8f21-ab80df66f719"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:55:41 crc kubenswrapper[4909]: I1126 07:55:41.702825 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d50f89-afc6-49c9-8f21-ab80df66f719-kube-api-access-zqv8d" (OuterVolumeSpecName: "kube-api-access-zqv8d") pod "e9d50f89-afc6-49c9-8f21-ab80df66f719" (UID: "e9d50f89-afc6-49c9-8f21-ab80df66f719"). InnerVolumeSpecName "kube-api-access-zqv8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:55:41 crc kubenswrapper[4909]: I1126 07:55:41.719336 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9d50f89-afc6-49c9-8f21-ab80df66f719" (UID: "e9d50f89-afc6-49c9-8f21-ab80df66f719"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:55:41 crc kubenswrapper[4909]: I1126 07:55:41.791283 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:55:41 crc kubenswrapper[4909]: I1126 07:55:41.791325 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqv8d\" (UniqueName: \"kubernetes.io/projected/e9d50f89-afc6-49c9-8f21-ab80df66f719-kube-api-access-zqv8d\") on node \"crc\" DevicePath \"\"" Nov 26 07:55:41 crc kubenswrapper[4909]: I1126 07:55:41.791338 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d50f89-afc6-49c9-8f21-ab80df66f719-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.189754 4909 generic.go:334] "Generic (PLEG): container finished" podID="e9d50f89-afc6-49c9-8f21-ab80df66f719" containerID="86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593" exitCode=0 Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.189820 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcl8s" event={"ID":"e9d50f89-afc6-49c9-8f21-ab80df66f719","Type":"ContainerDied","Data":"86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593"} Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.189848 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcl8s" Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.189868 4909 scope.go:117] "RemoveContainer" containerID="86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593" Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.189854 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcl8s" event={"ID":"e9d50f89-afc6-49c9-8f21-ab80df66f719","Type":"ContainerDied","Data":"7c1a09038943ecd12c6df7fdb4456dd6910a9218a0d431b84743e8984ff1bc77"} Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.217494 4909 scope.go:117] "RemoveContainer" containerID="b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f" Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.221160 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcl8s"] Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.226680 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcl8s"] Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.246474 4909 scope.go:117] "RemoveContainer" containerID="a7df5bd2154b271f74c211c76bf7daead8bb92d9f887e8ea0f0795806b9e87dc" Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.263876 4909 scope.go:117] "RemoveContainer" containerID="86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593" Nov 26 07:55:42 crc kubenswrapper[4909]: E1126 07:55:42.264317 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593\": container with ID starting with 86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593 not found: ID does not exist" containerID="86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593" Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.264351 4909 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593"} err="failed to get container status \"86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593\": rpc error: code = NotFound desc = could not find container \"86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593\": container with ID starting with 86f414eb3f58e3be3018d63301b7b7fedefde2814ebaf0fa616cd646ed819593 not found: ID does not exist" Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.264388 4909 scope.go:117] "RemoveContainer" containerID="b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f" Nov 26 07:55:42 crc kubenswrapper[4909]: E1126 07:55:42.264964 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f\": container with ID starting with b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f not found: ID does not exist" containerID="b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f" Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.265026 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f"} err="failed to get container status \"b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f\": rpc error: code = NotFound desc = could not find container \"b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f\": container with ID starting with b1d7266d03e41c884c8ff1a5191420a1dde05324bb82dbcdbdc51916b3b99d1f not found: ID does not exist" Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.265060 4909 scope.go:117] "RemoveContainer" containerID="a7df5bd2154b271f74c211c76bf7daead8bb92d9f887e8ea0f0795806b9e87dc" Nov 26 07:55:42 crc kubenswrapper[4909]: E1126 07:55:42.265434 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7df5bd2154b271f74c211c76bf7daead8bb92d9f887e8ea0f0795806b9e87dc\": container with ID starting with a7df5bd2154b271f74c211c76bf7daead8bb92d9f887e8ea0f0795806b9e87dc not found: ID does not exist" containerID="a7df5bd2154b271f74c211c76bf7daead8bb92d9f887e8ea0f0795806b9e87dc" Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.265463 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7df5bd2154b271f74c211c76bf7daead8bb92d9f887e8ea0f0795806b9e87dc"} err="failed to get container status \"a7df5bd2154b271f74c211c76bf7daead8bb92d9f887e8ea0f0795806b9e87dc\": rpc error: code = NotFound desc = could not find container \"a7df5bd2154b271f74c211c76bf7daead8bb92d9f887e8ea0f0795806b9e87dc\": container with ID starting with a7df5bd2154b271f74c211c76bf7daead8bb92d9f887e8ea0f0795806b9e87dc not found: ID does not exist" Nov 26 07:55:42 crc kubenswrapper[4909]: I1126 07:55:42.509117 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d50f89-afc6-49c9-8f21-ab80df66f719" path="/var/lib/kubelet/pods/e9d50f89-afc6-49c9-8f21-ab80df66f719/volumes" Nov 26 07:56:07 crc kubenswrapper[4909]: I1126 07:56:07.300736 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:56:07 crc kubenswrapper[4909]: I1126 07:56:07.301655 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.609312 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c76ht"] Nov 26 07:56:26 crc kubenswrapper[4909]: E1126 07:56:26.610125 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d50f89-afc6-49c9-8f21-ab80df66f719" containerName="extract-content" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.610138 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d50f89-afc6-49c9-8f21-ab80df66f719" containerName="extract-content" Nov 26 07:56:26 crc kubenswrapper[4909]: E1126 07:56:26.610164 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d50f89-afc6-49c9-8f21-ab80df66f719" containerName="extract-utilities" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.610172 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d50f89-afc6-49c9-8f21-ab80df66f719" containerName="extract-utilities" Nov 26 07:56:26 crc kubenswrapper[4909]: E1126 07:56:26.610190 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d50f89-afc6-49c9-8f21-ab80df66f719" containerName="registry-server" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.610198 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d50f89-afc6-49c9-8f21-ab80df66f719" containerName="registry-server" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.610369 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d50f89-afc6-49c9-8f21-ab80df66f719" containerName="registry-server" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.611492 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.632470 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c76ht"] Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.754136 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdckh\" (UniqueName: \"kubernetes.io/projected/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-kube-api-access-hdckh\") pod \"certified-operators-c76ht\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.754227 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-utilities\") pod \"certified-operators-c76ht\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.754495 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-catalog-content\") pod \"certified-operators-c76ht\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.855479 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-catalog-content\") pod \"certified-operators-c76ht\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.855574 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdckh\" (UniqueName: \"kubernetes.io/projected/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-kube-api-access-hdckh\") pod \"certified-operators-c76ht\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.855622 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-utilities\") pod \"certified-operators-c76ht\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.856027 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-utilities\") pod \"certified-operators-c76ht\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.856240 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-catalog-content\") pod \"certified-operators-c76ht\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.882555 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hdckh\" (UniqueName: \"kubernetes.io/projected/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-kube-api-access-hdckh\") pod \"certified-operators-c76ht\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:26 crc kubenswrapper[4909]: I1126 07:56:26.964011 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:27 crc kubenswrapper[4909]: I1126 07:56:27.407651 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c76ht"] Nov 26 07:56:27 crc kubenswrapper[4909]: I1126 07:56:27.555273 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c76ht" event={"ID":"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264","Type":"ContainerStarted","Data":"dd012a0431b11592a63abde97990519258a60275d7bdca72487bd082d549a1be"} Nov 26 07:56:28 crc kubenswrapper[4909]: I1126 07:56:28.563806 4909 generic.go:334] "Generic (PLEG): container finished" podID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" containerID="eeb8b2ae4e1b37edeb63ba5e52e9d7649d86e58cd7b0af52b65255a7eb79b0e9" exitCode=0 Nov 26 07:56:28 crc kubenswrapper[4909]: I1126 07:56:28.563913 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c76ht" event={"ID":"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264","Type":"ContainerDied","Data":"eeb8b2ae4e1b37edeb63ba5e52e9d7649d86e58cd7b0af52b65255a7eb79b0e9"} Nov 26 07:56:29 crc kubenswrapper[4909]: I1126 07:56:29.576486 4909 generic.go:334] "Generic (PLEG): container finished" podID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" containerID="2f4da0b1225f183c8e0308d62776265d6d7e6c955aaa8fe8e0221d2413966ba3" exitCode=0 Nov 26 07:56:29 crc kubenswrapper[4909]: I1126 07:56:29.576558 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c76ht" event={"ID":"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264","Type":"ContainerDied","Data":"2f4da0b1225f183c8e0308d62776265d6d7e6c955aaa8fe8e0221d2413966ba3"} Nov 26 07:56:30 crc kubenswrapper[4909]: I1126 07:56:30.590776 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c76ht" event={"ID":"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264","Type":"ContainerStarted","Data":"0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4"} Nov 26 07:56:30 crc kubenswrapper[4909]: I1126 07:56:30.616752 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c76ht" podStartSLOduration=3.122906962 podStartE2EDuration="4.616727345s" podCreationTimestamp="2025-11-26 07:56:26 +0000 UTC" firstStartedPulling="2025-11-26 07:56:28.565799015 +0000 UTC m=+3360.712010181" lastFinishedPulling="2025-11-26 07:56:30.059619388 +0000 UTC m=+3362.205830564" observedRunningTime="2025-11-26 07:56:30.612200012 +0000 UTC m=+3362.758411228" watchObservedRunningTime="2025-11-26 07:56:30.616727345 +0000 UTC m=+3362.762938511" Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.811553 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56w6l"] Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.813480 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.830198 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56w6l"] Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.872465 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snkvb\" (UniqueName: \"kubernetes.io/projected/31edad90-c809-4f65-ba1b-e946d4c4cf75-kube-api-access-snkvb\") pod \"redhat-operators-56w6l\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.872538 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-utilities\") pod \"redhat-operators-56w6l\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.872566 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-catalog-content\") pod \"redhat-operators-56w6l\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.974263 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-utilities\") pod \"redhat-operators-56w6l\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.974320 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-catalog-content\") pod \"redhat-operators-56w6l\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.974368 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snkvb\" (UniqueName: \"kubernetes.io/projected/31edad90-c809-4f65-ba1b-e946d4c4cf75-kube-api-access-snkvb\") pod \"redhat-operators-56w6l\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.975056 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-utilities\") pod \"redhat-operators-56w6l\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.975082 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-catalog-content\") pod \"redhat-operators-56w6l\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:34 crc kubenswrapper[4909]: I1126 07:56:34.998673 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-snkvb\" (UniqueName: \"kubernetes.io/projected/31edad90-c809-4f65-ba1b-e946d4c4cf75-kube-api-access-snkvb\") pod \"redhat-operators-56w6l\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:35 crc kubenswrapper[4909]: I1126 07:56:35.141039 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:35 crc kubenswrapper[4909]: I1126 07:56:35.570431 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56w6l"] Nov 26 07:56:35 crc kubenswrapper[4909]: I1126 07:56:35.639244 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56w6l" event={"ID":"31edad90-c809-4f65-ba1b-e946d4c4cf75","Type":"ContainerStarted","Data":"4b5ce623b4817a6cf5e72bd90ac7e825fc024e929ff8111bfed9151c0e09e489"} Nov 26 07:56:36 crc kubenswrapper[4909]: I1126 07:56:36.650553 4909 generic.go:334] "Generic (PLEG): container finished" podID="31edad90-c809-4f65-ba1b-e946d4c4cf75" containerID="b1e5f721f7d368e5cc9ea1caabc29d447157800136807d5356dcb92db7163159" exitCode=0 Nov 26 07:56:36 crc kubenswrapper[4909]: I1126 07:56:36.650636 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56w6l" event={"ID":"31edad90-c809-4f65-ba1b-e946d4c4cf75","Type":"ContainerDied","Data":"b1e5f721f7d368e5cc9ea1caabc29d447157800136807d5356dcb92db7163159"} Nov 26 07:56:36 crc kubenswrapper[4909]: I1126 07:56:36.964780 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:36 crc kubenswrapper[4909]: I1126 07:56:36.964864 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:37 crc kubenswrapper[4909]: I1126 07:56:37.008773 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:37 crc kubenswrapper[4909]: I1126 07:56:37.300957 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:56:37 crc kubenswrapper[4909]: I1126 07:56:37.301445 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:56:37 crc kubenswrapper[4909]: I1126 07:56:37.662694 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56w6l" event={"ID":"31edad90-c809-4f65-ba1b-e946d4c4cf75","Type":"ContainerStarted","Data":"9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce"} Nov 26 07:56:37 crc kubenswrapper[4909]: I1126 07:56:37.705727 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:38 crc kubenswrapper[4909]: I1126 07:56:38.673097 4909 generic.go:334] "Generic (PLEG): container finished" podID="31edad90-c809-4f65-ba1b-e946d4c4cf75" 
containerID="9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce" exitCode=0 Nov 26 07:56:38 crc kubenswrapper[4909]: I1126 07:56:38.673175 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56w6l" event={"ID":"31edad90-c809-4f65-ba1b-e946d4c4cf75","Type":"ContainerDied","Data":"9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce"} Nov 26 07:56:39 crc kubenswrapper[4909]: I1126 07:56:39.395515 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c76ht"] Nov 26 07:56:39 crc kubenswrapper[4909]: I1126 07:56:39.687097 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56w6l" event={"ID":"31edad90-c809-4f65-ba1b-e946d4c4cf75","Type":"ContainerStarted","Data":"5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd"} Nov 26 07:56:39 crc kubenswrapper[4909]: I1126 07:56:39.719923 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56w6l" podStartSLOduration=3.268791254 podStartE2EDuration="5.719901342s" podCreationTimestamp="2025-11-26 07:56:34 +0000 UTC" firstStartedPulling="2025-11-26 07:56:36.65494138 +0000 UTC m=+3368.801152556" lastFinishedPulling="2025-11-26 07:56:39.106051468 +0000 UTC m=+3371.252262644" observedRunningTime="2025-11-26 07:56:39.710968248 +0000 UTC m=+3371.857179434" watchObservedRunningTime="2025-11-26 07:56:39.719901342 +0000 UTC m=+3371.866112518" Nov 26 07:56:40 crc kubenswrapper[4909]: I1126 07:56:40.697073 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c76ht" podUID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" containerName="registry-server" containerID="cri-o://0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4" gracePeriod=2 Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.183224 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.366777 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-catalog-content\") pod \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.367224 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdckh\" (UniqueName: \"kubernetes.io/projected/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-kube-api-access-hdckh\") pod \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.367253 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-utilities\") pod \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\" (UID: \"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264\") " Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.368376 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-utilities" (OuterVolumeSpecName: "utilities") pod "8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" (UID: "8e54c77f-caaa-4d5b-b9b0-c37efcfc7264"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.374191 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-kube-api-access-hdckh" (OuterVolumeSpecName: "kube-api-access-hdckh") pod "8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" (UID: "8e54c77f-caaa-4d5b-b9b0-c37efcfc7264"). InnerVolumeSpecName "kube-api-access-hdckh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.426117 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" (UID: "8e54c77f-caaa-4d5b-b9b0-c37efcfc7264"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.469418 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdckh\" (UniqueName: \"kubernetes.io/projected/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-kube-api-access-hdckh\") on node \"crc\" DevicePath \"\"" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.469453 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.469468 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.709242 4909 generic.go:334] "Generic (PLEG): container finished" podID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" containerID="0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4" exitCode=0 Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.709322 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c76ht" event={"ID":"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264","Type":"ContainerDied","Data":"0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4"} Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.709383 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c76ht" event={"ID":"8e54c77f-caaa-4d5b-b9b0-c37efcfc7264","Type":"ContainerDied","Data":"dd012a0431b11592a63abde97990519258a60275d7bdca72487bd082d549a1be"} Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.709404 4909 scope.go:117] "RemoveContainer" containerID="0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.709342 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c76ht" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.730690 4909 scope.go:117] "RemoveContainer" containerID="2f4da0b1225f183c8e0308d62776265d6d7e6c955aaa8fe8e0221d2413966ba3" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.748855 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c76ht"] Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.748906 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c76ht"] Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.771825 4909 scope.go:117] "RemoveContainer" containerID="eeb8b2ae4e1b37edeb63ba5e52e9d7649d86e58cd7b0af52b65255a7eb79b0e9" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.789644 4909 scope.go:117] "RemoveContainer" containerID="0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4" Nov 26 07:56:41 crc kubenswrapper[4909]: E1126 07:56:41.790134 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4\": container with ID starting with 0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4 not found: ID does not exist" containerID="0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.790169 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4"} err="failed to get container status \"0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4\": rpc error: code = NotFound desc = could not find container \"0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4\": container with ID starting with 0c108d92c7740bf1a899a8023e4289396ec205d8f72ab60c13984976e3631ad4 not found: ID does not exist" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.790191 4909 scope.go:117] "RemoveContainer" containerID="2f4da0b1225f183c8e0308d62776265d6d7e6c955aaa8fe8e0221d2413966ba3" Nov 26 07:56:41 crc kubenswrapper[4909]: E1126 07:56:41.790608 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f4da0b1225f183c8e0308d62776265d6d7e6c955aaa8fe8e0221d2413966ba3\": container with ID starting with 2f4da0b1225f183c8e0308d62776265d6d7e6c955aaa8fe8e0221d2413966ba3 not found: ID does not exist" containerID="2f4da0b1225f183c8e0308d62776265d6d7e6c955aaa8fe8e0221d2413966ba3" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.790685 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4da0b1225f183c8e0308d62776265d6d7e6c955aaa8fe8e0221d2413966ba3"} err="failed to get container status \"2f4da0b1225f183c8e0308d62776265d6d7e6c955aaa8fe8e0221d2413966ba3\": rpc error: code = NotFound desc = could not find container \"2f4da0b1225f183c8e0308d62776265d6d7e6c955aaa8fe8e0221d2413966ba3\": container with ID starting with 2f4da0b1225f183c8e0308d62776265d6d7e6c955aaa8fe8e0221d2413966ba3 not found: ID does not exist" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.790716 4909 scope.go:117] "RemoveContainer" containerID="eeb8b2ae4e1b37edeb63ba5e52e9d7649d86e58cd7b0af52b65255a7eb79b0e9" Nov 26 07:56:41 crc kubenswrapper[4909]: E1126 07:56:41.791145 4909 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"eeb8b2ae4e1b37edeb63ba5e52e9d7649d86e58cd7b0af52b65255a7eb79b0e9\": container with ID starting with eeb8b2ae4e1b37edeb63ba5e52e9d7649d86e58cd7b0af52b65255a7eb79b0e9 not found: ID does not exist" containerID="eeb8b2ae4e1b37edeb63ba5e52e9d7649d86e58cd7b0af52b65255a7eb79b0e9" Nov 26 07:56:41 crc kubenswrapper[4909]: I1126 07:56:41.791182 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeb8b2ae4e1b37edeb63ba5e52e9d7649d86e58cd7b0af52b65255a7eb79b0e9"} err="failed to get container status \"eeb8b2ae4e1b37edeb63ba5e52e9d7649d86e58cd7b0af52b65255a7eb79b0e9\": rpc error: code = NotFound desc = could not find container \"eeb8b2ae4e1b37edeb63ba5e52e9d7649d86e58cd7b0af52b65255a7eb79b0e9\": container with ID starting with eeb8b2ae4e1b37edeb63ba5e52e9d7649d86e58cd7b0af52b65255a7eb79b0e9 not found: ID does not exist" Nov 26 07:56:42 crc kubenswrapper[4909]: I1126 07:56:42.514538 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" path="/var/lib/kubelet/pods/8e54c77f-caaa-4d5b-b9b0-c37efcfc7264/volumes" Nov 26 07:56:45 crc kubenswrapper[4909]: I1126 07:56:45.141656 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:45 crc kubenswrapper[4909]: I1126 07:56:45.141714 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:45 crc kubenswrapper[4909]: I1126 07:56:45.185691 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:45 crc kubenswrapper[4909]: I1126 07:56:45.835140 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:46 crc kubenswrapper[4909]: I1126 07:56:46.395123 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56w6l"] Nov 26 07:56:47 crc kubenswrapper[4909]: I1126 07:56:47.780021 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-56w6l" podUID="31edad90-c809-4f65-ba1b-e946d4c4cf75" containerName="registry-server" containerID="cri-o://5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd" gracePeriod=2 Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.231749 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.374129 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-catalog-content\") pod \"31edad90-c809-4f65-ba1b-e946d4c4cf75\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.374235 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snkvb\" (UniqueName: \"kubernetes.io/projected/31edad90-c809-4f65-ba1b-e946d4c4cf75-kube-api-access-snkvb\") pod \"31edad90-c809-4f65-ba1b-e946d4c4cf75\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.374376 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-utilities\") pod \"31edad90-c809-4f65-ba1b-e946d4c4cf75\" (UID: \"31edad90-c809-4f65-ba1b-e946d4c4cf75\") " Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.375434 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-utilities" (OuterVolumeSpecName: "utilities") pod "31edad90-c809-4f65-ba1b-e946d4c4cf75" (UID: "31edad90-c809-4f65-ba1b-e946d4c4cf75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.380585 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31edad90-c809-4f65-ba1b-e946d4c4cf75-kube-api-access-snkvb" (OuterVolumeSpecName: "kube-api-access-snkvb") pod "31edad90-c809-4f65-ba1b-e946d4c4cf75" (UID: "31edad90-c809-4f65-ba1b-e946d4c4cf75"). InnerVolumeSpecName "kube-api-access-snkvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.476208 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.476616 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snkvb\" (UniqueName: \"kubernetes.io/projected/31edad90-c809-4f65-ba1b-e946d4c4cf75-kube-api-access-snkvb\") on node \"crc\" DevicePath \"\"" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.792238 4909 generic.go:334] "Generic (PLEG): container finished" podID="31edad90-c809-4f65-ba1b-e946d4c4cf75" containerID="5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd" exitCode=0 Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.792303 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56w6l" event={"ID":"31edad90-c809-4f65-ba1b-e946d4c4cf75","Type":"ContainerDied","Data":"5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd"} Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.792319 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56w6l" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.792349 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56w6l" event={"ID":"31edad90-c809-4f65-ba1b-e946d4c4cf75","Type":"ContainerDied","Data":"4b5ce623b4817a6cf5e72bd90ac7e825fc024e929ff8111bfed9151c0e09e489"} Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.792391 4909 scope.go:117] "RemoveContainer" containerID="5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.830956 4909 scope.go:117] "RemoveContainer" containerID="9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.867492 4909 scope.go:117] "RemoveContainer" containerID="b1e5f721f7d368e5cc9ea1caabc29d447157800136807d5356dcb92db7163159" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.902463 4909 scope.go:117] "RemoveContainer" containerID="5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd" Nov 26 07:56:48 crc kubenswrapper[4909]: E1126 07:56:48.902969 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd\": container with ID starting with 5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd not found: ID does not exist" containerID="5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.903010 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd"} err="failed to get container status \"5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd\": rpc error: code = NotFound desc = could not find container \"5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd\": container with ID starting with 5ab93b3678576bc4f62375f5b59a2b70064a4e34778eefffae83b8db7ac2f6dd not found: ID does not exist" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.903035 4909 scope.go:117] "RemoveContainer" containerID="9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce" Nov 26 07:56:48 crc kubenswrapper[4909]: E1126 07:56:48.903482 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce\": container with ID starting with 9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce not found: ID does not exist" containerID="9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.903510 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce"} err="failed to get container status \"9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce\": rpc error: code = NotFound desc = could not find container \"9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce\": container with ID starting with 9fd0fbb155eb9226472dbd3d16558bf7c672fbf92d0d011032eb2ca914a200ce not found: ID does not exist" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.903527 4909 scope.go:117] "RemoveContainer" 
containerID="b1e5f721f7d368e5cc9ea1caabc29d447157800136807d5356dcb92db7163159" Nov 26 07:56:48 crc kubenswrapper[4909]: E1126 07:56:48.903830 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e5f721f7d368e5cc9ea1caabc29d447157800136807d5356dcb92db7163159\": container with ID starting with b1e5f721f7d368e5cc9ea1caabc29d447157800136807d5356dcb92db7163159 not found: ID does not exist" containerID="b1e5f721f7d368e5cc9ea1caabc29d447157800136807d5356dcb92db7163159" Nov 26 07:56:48 crc kubenswrapper[4909]: I1126 07:56:48.903862 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e5f721f7d368e5cc9ea1caabc29d447157800136807d5356dcb92db7163159"} err="failed to get container status \"b1e5f721f7d368e5cc9ea1caabc29d447157800136807d5356dcb92db7163159\": rpc error: code = NotFound desc = could not find container \"b1e5f721f7d368e5cc9ea1caabc29d447157800136807d5356dcb92db7163159\": container with ID starting with b1e5f721f7d368e5cc9ea1caabc29d447157800136807d5356dcb92db7163159 not found: ID does not exist" Nov 26 07:56:49 crc kubenswrapper[4909]: I1126 07:56:49.526633 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31edad90-c809-4f65-ba1b-e946d4c4cf75" (UID: "31edad90-c809-4f65-ba1b-e946d4c4cf75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:56:49 crc kubenswrapper[4909]: I1126 07:56:49.596168 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31edad90-c809-4f65-ba1b-e946d4c4cf75-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:56:49 crc kubenswrapper[4909]: I1126 07:56:49.749276 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56w6l"] Nov 26 07:56:49 crc kubenswrapper[4909]: I1126 07:56:49.757581 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-56w6l"] Nov 26 07:56:50 crc kubenswrapper[4909]: I1126 07:56:50.511656 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31edad90-c809-4f65-ba1b-e946d4c4cf75" path="/var/lib/kubelet/pods/31edad90-c809-4f65-ba1b-e946d4c4cf75/volumes" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.457172 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-prf2q"] Nov 26 07:56:57 crc kubenswrapper[4909]: E1126 07:56:57.458249 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" containerName="extract-content" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.458269 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" containerName="extract-content" Nov 26 07:56:57 crc kubenswrapper[4909]: E1126 07:56:57.458290 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31edad90-c809-4f65-ba1b-e946d4c4cf75" containerName="registry-server" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.458298 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="31edad90-c809-4f65-ba1b-e946d4c4cf75" containerName="registry-server" Nov 26 07:56:57 crc kubenswrapper[4909]: E1126 07:56:57.458328 4909 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="31edad90-c809-4f65-ba1b-e946d4c4cf75" containerName="extract-utilities" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.458340 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="31edad90-c809-4f65-ba1b-e946d4c4cf75" containerName="extract-utilities" Nov 26 07:56:57 crc kubenswrapper[4909]: E1126 07:56:57.458359 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31edad90-c809-4f65-ba1b-e946d4c4cf75" containerName="extract-content" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.458367 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="31edad90-c809-4f65-ba1b-e946d4c4cf75" containerName="extract-content" Nov 26 07:56:57 crc kubenswrapper[4909]: E1126 07:56:57.458391 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" containerName="extract-utilities" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.458399 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" containerName="extract-utilities" Nov 26 07:56:57 crc kubenswrapper[4909]: E1126 07:56:57.458422 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" containerName="registry-server" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.458430 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" containerName="registry-server" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.458661 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="31edad90-c809-4f65-ba1b-e946d4c4cf75" containerName="registry-server" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.458690 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e54c77f-caaa-4d5b-b9b0-c37efcfc7264" containerName="registry-server" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.460315 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.475209 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-prf2q"] Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.634405 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-utilities\") pod \"community-operators-prf2q\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.634518 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-catalog-content\") pod \"community-operators-prf2q\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.634561 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkh9f\" (UniqueName: \"kubernetes.io/projected/be365c6e-b302-4f56-9992-a7bb87d8c4e8-kube-api-access-gkh9f\") pod \"community-operators-prf2q\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.735792 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-utilities\") pod \"community-operators-prf2q\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.735904 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-catalog-content\") pod \"community-operators-prf2q\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.736411 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-utilities\") pod \"community-operators-prf2q\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.736466 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-catalog-content\") pod \"community-operators-prf2q\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.736549 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkh9f\" (UniqueName: \"kubernetes.io/projected/be365c6e-b302-4f56-9992-a7bb87d8c4e8-kube-api-access-gkh9f\") pod \"community-operators-prf2q\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.760181 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gkh9f\" (UniqueName: \"kubernetes.io/projected/be365c6e-b302-4f56-9992-a7bb87d8c4e8-kube-api-access-gkh9f\") pod \"community-operators-prf2q\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:56:57 crc kubenswrapper[4909]: I1126 07:56:57.782458 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:56:58 crc kubenswrapper[4909]: I1126 07:56:58.247989 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-prf2q"] Nov 26 07:56:58 crc kubenswrapper[4909]: I1126 07:56:58.878637 4909 generic.go:334] "Generic (PLEG): container finished" podID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" containerID="6eff06a7ddc8e3ab42bb1e8cc74ce35bb500a0e5cdf832fe15271394298868f7" exitCode=0 Nov 26 07:56:58 crc kubenswrapper[4909]: I1126 07:56:58.878825 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prf2q" event={"ID":"be365c6e-b302-4f56-9992-a7bb87d8c4e8","Type":"ContainerDied","Data":"6eff06a7ddc8e3ab42bb1e8cc74ce35bb500a0e5cdf832fe15271394298868f7"} Nov 26 07:56:58 crc kubenswrapper[4909]: I1126 07:56:58.878945 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prf2q" event={"ID":"be365c6e-b302-4f56-9992-a7bb87d8c4e8","Type":"ContainerStarted","Data":"a361964a4d7f7ac1c000368933275bf139493d51dd016dfced5e86679cce5c4f"} Nov 26 07:56:59 crc kubenswrapper[4909]: I1126 07:56:59.889838 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prf2q" event={"ID":"be365c6e-b302-4f56-9992-a7bb87d8c4e8","Type":"ContainerStarted","Data":"d11fc17cb90b3c2c35dbac39b1c3bc720a6f68a94c577bdd1ec6769107ac618c"} Nov 26 07:57:00 crc kubenswrapper[4909]: I1126 07:57:00.924017 4909 generic.go:334] "Generic (PLEG): container finished" podID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" containerID="d11fc17cb90b3c2c35dbac39b1c3bc720a6f68a94c577bdd1ec6769107ac618c" exitCode=0 Nov 26 07:57:00 crc kubenswrapper[4909]: I1126 07:57:00.924085 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prf2q" event={"ID":"be365c6e-b302-4f56-9992-a7bb87d8c4e8","Type":"ContainerDied","Data":"d11fc17cb90b3c2c35dbac39b1c3bc720a6f68a94c577bdd1ec6769107ac618c"} Nov 26 07:57:01 crc kubenswrapper[4909]: I1126 07:57:01.939133 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prf2q" event={"ID":"be365c6e-b302-4f56-9992-a7bb87d8c4e8","Type":"ContainerStarted","Data":"d1269e6bf8fc2feebf91801b94bef6e4e49310d16c272d74b398d9dfa1f0d833"} Nov 26 07:57:01 crc kubenswrapper[4909]: I1126 07:57:01.972567 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-prf2q" podStartSLOduration=2.527240732 podStartE2EDuration="4.972538892s" podCreationTimestamp="2025-11-26 07:56:57 +0000 UTC" firstStartedPulling="2025-11-26 07:56:58.880511592 +0000 UTC m=+3391.026722768" lastFinishedPulling="2025-11-26 07:57:01.325809762 +0000 UTC m=+3393.472020928" observedRunningTime="2025-11-26 07:57:01.960522545 +0000 UTC m=+3394.106733761" watchObservedRunningTime="2025-11-26 07:57:01.972538892 +0000 UTC m=+3394.118750098" Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.301390 4909 patch_prober.go:28] interesting 
pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.302105 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.302171 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.302929 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0077080d92dede8186cc2713354944a03a3d849e9c3d67ed508f9cb7a8cc9449"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.303025 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://0077080d92dede8186cc2713354944a03a3d849e9c3d67ed508f9cb7a8cc9449" gracePeriod=600 Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.783494 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.783916 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.850481 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.994008 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="0077080d92dede8186cc2713354944a03a3d849e9c3d67ed508f9cb7a8cc9449" exitCode=0 Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.994072 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"0077080d92dede8186cc2713354944a03a3d849e9c3d67ed508f9cb7a8cc9449"} Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.994103 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5"} Nov 26 07:57:07 crc kubenswrapper[4909]: I1126 07:57:07.994123 4909 scope.go:117] "RemoveContainer" containerID="dfb06a420e947c90c429cf66c10ff6d35aa89de4e2f0847db1a28f38667193c5" Nov 26 07:57:08 crc kubenswrapper[4909]: I1126 07:57:08.049328 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-prf2q" Nov 
26 07:57:08 crc kubenswrapper[4909]: I1126 07:57:08.094644 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-prf2q"] Nov 26 07:57:10 crc kubenswrapper[4909]: I1126 07:57:10.020199 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-prf2q" podUID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" containerName="registry-server" containerID="cri-o://d1269e6bf8fc2feebf91801b94bef6e4e49310d16c272d74b398d9dfa1f0d833" gracePeriod=2 Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.034217 4909 generic.go:334] "Generic (PLEG): container finished" podID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" containerID="d1269e6bf8fc2feebf91801b94bef6e4e49310d16c272d74b398d9dfa1f0d833" exitCode=0 Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.034266 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prf2q" event={"ID":"be365c6e-b302-4f56-9992-a7bb87d8c4e8","Type":"ContainerDied","Data":"d1269e6bf8fc2feebf91801b94bef6e4e49310d16c272d74b398d9dfa1f0d833"} Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.217105 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.354716 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-utilities\") pod \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.354816 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkh9f\" (UniqueName: \"kubernetes.io/projected/be365c6e-b302-4f56-9992-a7bb87d8c4e8-kube-api-access-gkh9f\") pod \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.354932 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-catalog-content\") pod \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\" (UID: \"be365c6e-b302-4f56-9992-a7bb87d8c4e8\") " Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.355658 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-utilities" (OuterVolumeSpecName: "utilities") pod "be365c6e-b302-4f56-9992-a7bb87d8c4e8" (UID: "be365c6e-b302-4f56-9992-a7bb87d8c4e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.360530 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be365c6e-b302-4f56-9992-a7bb87d8c4e8-kube-api-access-gkh9f" (OuterVolumeSpecName: "kube-api-access-gkh9f") pod "be365c6e-b302-4f56-9992-a7bb87d8c4e8" (UID: "be365c6e-b302-4f56-9992-a7bb87d8c4e8"). InnerVolumeSpecName "kube-api-access-gkh9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.410882 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be365c6e-b302-4f56-9992-a7bb87d8c4e8" (UID: "be365c6e-b302-4f56-9992-a7bb87d8c4e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.457795 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkh9f\" (UniqueName: \"kubernetes.io/projected/be365c6e-b302-4f56-9992-a7bb87d8c4e8-kube-api-access-gkh9f\") on node \"crc\" DevicePath \"\"" Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.457842 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 07:57:11 crc kubenswrapper[4909]: I1126 07:57:11.457858 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be365c6e-b302-4f56-9992-a7bb87d8c4e8-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 07:57:12 crc kubenswrapper[4909]: I1126 07:57:12.048436 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prf2q" event={"ID":"be365c6e-b302-4f56-9992-a7bb87d8c4e8","Type":"ContainerDied","Data":"a361964a4d7f7ac1c000368933275bf139493d51dd016dfced5e86679cce5c4f"} Nov 26 07:57:12 crc kubenswrapper[4909]: I1126 07:57:12.048800 4909 scope.go:117] "RemoveContainer" containerID="d1269e6bf8fc2feebf91801b94bef6e4e49310d16c272d74b398d9dfa1f0d833" Nov 26 07:57:12 crc kubenswrapper[4909]: I1126 07:57:12.048536 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-prf2q" Nov 26 07:57:12 crc kubenswrapper[4909]: I1126 07:57:12.071248 4909 scope.go:117] "RemoveContainer" containerID="d11fc17cb90b3c2c35dbac39b1c3bc720a6f68a94c577bdd1ec6769107ac618c" Nov 26 07:57:12 crc kubenswrapper[4909]: I1126 07:57:12.102968 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-prf2q"] Nov 26 07:57:12 crc kubenswrapper[4909]: I1126 07:57:12.112461 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-prf2q"] Nov 26 07:57:12 crc kubenswrapper[4909]: I1126 07:57:12.121438 4909 scope.go:117] "RemoveContainer" containerID="6eff06a7ddc8e3ab42bb1e8cc74ce35bb500a0e5cdf832fe15271394298868f7" Nov 26 07:57:12 crc kubenswrapper[4909]: I1126 07:57:12.506567 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" path="/var/lib/kubelet/pods/be365c6e-b302-4f56-9992-a7bb87d8c4e8/volumes" Nov 26 07:59:07 crc kubenswrapper[4909]: I1126 07:59:07.301157 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:59:07 crc kubenswrapper[4909]: I1126 07:59:07.301808 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 07:59:37 crc kubenswrapper[4909]: I1126 07:59:37.301236 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 07:59:37 crc kubenswrapper[4909]: I1126 07:59:37.301809 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.152118 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh"] Nov 26 08:00:00 crc kubenswrapper[4909]: E1126 08:00:00.152989 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" containerName="extract-content" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.153006 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" containerName="extract-content" Nov 26 08:00:00 crc kubenswrapper[4909]: E1126 08:00:00.153053 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" containerName="extract-utilities" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.153060 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" containerName="extract-utilities" Nov 26 08:00:00 crc 
kubenswrapper[4909]: E1126 08:00:00.153080 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" containerName="registry-server" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.153088 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" containerName="registry-server" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.153264 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="be365c6e-b302-4f56-9992-a7bb87d8c4e8" containerName="registry-server" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.153892 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.156036 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.157390 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.162195 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh"] Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.190477 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26c1c2a9-3b99-416e-be50-db485df71b18-secret-volume\") pod \"collect-profiles-29402400-6nxvh\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.190532 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26c1c2a9-3b99-416e-be50-db485df71b18-config-volume\") pod \"collect-profiles-29402400-6nxvh\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.190562 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fkq9\" (UniqueName: \"kubernetes.io/projected/26c1c2a9-3b99-416e-be50-db485df71b18-kube-api-access-2fkq9\") pod \"collect-profiles-29402400-6nxvh\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.291771 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26c1c2a9-3b99-416e-be50-db485df71b18-secret-volume\") pod \"collect-profiles-29402400-6nxvh\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.291844 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26c1c2a9-3b99-416e-be50-db485df71b18-config-volume\") pod \"collect-profiles-29402400-6nxvh\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.291873 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fkq9\" (UniqueName: \"kubernetes.io/projected/26c1c2a9-3b99-416e-be50-db485df71b18-kube-api-access-2fkq9\") pod \"collect-profiles-29402400-6nxvh\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.293689 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26c1c2a9-3b99-416e-be50-db485df71b18-config-volume\") pod \"collect-profiles-29402400-6nxvh\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.306909 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26c1c2a9-3b99-416e-be50-db485df71b18-secret-volume\") pod \"collect-profiles-29402400-6nxvh\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.316718 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fkq9\" (UniqueName: \"kubernetes.io/projected/26c1c2a9-3b99-416e-be50-db485df71b18-kube-api-access-2fkq9\") pod \"collect-profiles-29402400-6nxvh\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.492331 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:00 crc kubenswrapper[4909]: I1126 08:00:00.943768 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh"] Nov 26 08:00:01 crc kubenswrapper[4909]: I1126 08:00:01.876928 4909 generic.go:334] "Generic (PLEG): container finished" podID="26c1c2a9-3b99-416e-be50-db485df71b18" containerID="7671d9e1b14def9c86dee2661f26b1aac377fc0ea53aff520ec83103d04a94f3" exitCode=0 Nov 26 08:00:01 crc kubenswrapper[4909]: I1126 08:00:01.877053 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" event={"ID":"26c1c2a9-3b99-416e-be50-db485df71b18","Type":"ContainerDied","Data":"7671d9e1b14def9c86dee2661f26b1aac377fc0ea53aff520ec83103d04a94f3"} Nov 26 08:00:01 crc kubenswrapper[4909]: I1126 08:00:01.877232 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" event={"ID":"26c1c2a9-3b99-416e-be50-db485df71b18","Type":"ContainerStarted","Data":"12aaa60dba93c2c8e0765ba19d0b6dd3065bc9f07d71a8233fb30ea2da3b4acb"} Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.201327 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.341388 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26c1c2a9-3b99-416e-be50-db485df71b18-secret-volume\") pod \"26c1c2a9-3b99-416e-be50-db485df71b18\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.341523 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26c1c2a9-3b99-416e-be50-db485df71b18-config-volume\") pod \"26c1c2a9-3b99-416e-be50-db485df71b18\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.341552 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fkq9\" (UniqueName: \"kubernetes.io/projected/26c1c2a9-3b99-416e-be50-db485df71b18-kube-api-access-2fkq9\") pod \"26c1c2a9-3b99-416e-be50-db485df71b18\" (UID: \"26c1c2a9-3b99-416e-be50-db485df71b18\") " Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.342611 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c1c2a9-3b99-416e-be50-db485df71b18-config-volume" (OuterVolumeSpecName: "config-volume") pod "26c1c2a9-3b99-416e-be50-db485df71b18" (UID: "26c1c2a9-3b99-416e-be50-db485df71b18"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.347072 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c1c2a9-3b99-416e-be50-db485df71b18-kube-api-access-2fkq9" (OuterVolumeSpecName: "kube-api-access-2fkq9") pod "26c1c2a9-3b99-416e-be50-db485df71b18" (UID: "26c1c2a9-3b99-416e-be50-db485df71b18"). InnerVolumeSpecName "kube-api-access-2fkq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.347381 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c1c2a9-3b99-416e-be50-db485df71b18-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "26c1c2a9-3b99-416e-be50-db485df71b18" (UID: "26c1c2a9-3b99-416e-be50-db485df71b18"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.442906 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26c1c2a9-3b99-416e-be50-db485df71b18-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.443255 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26c1c2a9-3b99-416e-be50-db485df71b18-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.443738 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fkq9\" (UniqueName: \"kubernetes.io/projected/26c1c2a9-3b99-416e-be50-db485df71b18-kube-api-access-2fkq9\") on node \"crc\" DevicePath \"\"" Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.893142 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" event={"ID":"26c1c2a9-3b99-416e-be50-db485df71b18","Type":"ContainerDied","Data":"12aaa60dba93c2c8e0765ba19d0b6dd3065bc9f07d71a8233fb30ea2da3b4acb"} Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.893194 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12aaa60dba93c2c8e0765ba19d0b6dd3065bc9f07d71a8233fb30ea2da3b4acb" Nov 26 08:00:03 crc kubenswrapper[4909]: I1126 08:00:03.893253 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh" Nov 26 08:00:04 crc kubenswrapper[4909]: I1126 08:00:04.290577 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"] Nov 26 08:00:04 crc kubenswrapper[4909]: I1126 08:00:04.300468 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402355-mfkkk"] Nov 26 08:00:04 crc kubenswrapper[4909]: I1126 08:00:04.515081 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c709137e-6913-47d3-8cbf-3b1ea4c598ef" path="/var/lib/kubelet/pods/c709137e-6913-47d3-8cbf-3b1ea4c598ef/volumes" Nov 26 08:00:07 crc kubenswrapper[4909]: I1126 08:00:07.300837 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:00:07 crc kubenswrapper[4909]: I1126 08:00:07.301164 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:00:07 crc kubenswrapper[4909]: I1126 08:00:07.301216 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 08:00:07 crc kubenswrapper[4909]: I1126 08:00:07.301882 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5"} 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 08:00:07 crc kubenswrapper[4909]: I1126 08:00:07.301941 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" gracePeriod=600 Nov 26 08:00:07 crc kubenswrapper[4909]: E1126 08:00:07.431275 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:00:07 crc kubenswrapper[4909]: I1126 08:00:07.927040 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" exitCode=0 Nov 26 08:00:07 crc kubenswrapper[4909]: I1126 08:00:07.927123 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5"} Nov 26 08:00:07 crc kubenswrapper[4909]: I1126 08:00:07.927209 4909 scope.go:117] "RemoveContainer" containerID="0077080d92dede8186cc2713354944a03a3d849e9c3d67ed508f9cb7a8cc9449" Nov 26 08:00:07 crc kubenswrapper[4909]: I1126 08:00:07.927719 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:00:07 crc kubenswrapper[4909]: E1126 08:00:07.928033 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:00:19 crc kubenswrapper[4909]: I1126 08:00:19.498850 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:00:19 crc kubenswrapper[4909]: E1126 08:00:19.499464 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:00:34 crc kubenswrapper[4909]: I1126 08:00:34.499296 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:00:34 crc kubenswrapper[4909]: E1126 08:00:34.500696 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:00:36 crc kubenswrapper[4909]: I1126 08:00:36.235292 4909 scope.go:117] "RemoveContainer" containerID="06b6916491440fff175c0fcd648d46fc98acd442352b71d7cb86fcf80ff8af8e" Nov 26 08:00:46 crc kubenswrapper[4909]: I1126 08:00:46.499769 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:00:46 crc kubenswrapper[4909]: E1126 08:00:46.500765 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:00:57 crc kubenswrapper[4909]: I1126 08:00:57.499363 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:00:57 crc kubenswrapper[4909]: E1126 08:00:57.500365 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:01:09 crc kubenswrapper[4909]: I1126 08:01:09.499757 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:01:09 crc kubenswrapper[4909]: E1126 08:01:09.500846 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:01:21 crc kubenswrapper[4909]: I1126 08:01:21.498411 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:01:21 crc kubenswrapper[4909]: E1126 08:01:21.499224 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:01:36 crc kubenswrapper[4909]: I1126 08:01:36.499110 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:01:36 crc kubenswrapper[4909]: E1126 08:01:36.499934 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:01:48 crc kubenswrapper[4909]: I1126 08:01:48.506405 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:01:48 crc kubenswrapper[4909]: E1126 08:01:48.507285 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:02:03 crc kubenswrapper[4909]: I1126 08:02:03.498908 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:02:03 crc kubenswrapper[4909]: E1126 08:02:03.500587 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:02:14 crc kubenswrapper[4909]: I1126 08:02:14.499350 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:02:14 crc kubenswrapper[4909]: E1126 08:02:14.500343 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:02:28 crc kubenswrapper[4909]: I1126 08:02:28.503777 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:02:28 crc kubenswrapper[4909]: E1126 08:02:28.504523 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:02:39 crc kubenswrapper[4909]: I1126 08:02:39.498433 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:02:39 crc kubenswrapper[4909]: E1126 08:02:39.499199 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:02:54 crc kubenswrapper[4909]: I1126 08:02:54.499193 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:02:54 crc kubenswrapper[4909]: E1126 08:02:54.500062 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:03:08 crc kubenswrapper[4909]: I1126 08:03:08.502102 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:03:08 crc kubenswrapper[4909]: E1126 08:03:08.503764 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:03:23 crc kubenswrapper[4909]: I1126 08:03:23.499475 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:03:23 crc kubenswrapper[4909]: E1126 08:03:23.500150 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:03:34 crc kubenswrapper[4909]: I1126 08:03:34.498928 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:03:34 crc kubenswrapper[4909]: E1126 08:03:34.499659 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:03:45 crc kubenswrapper[4909]: I1126 08:03:45.500065 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:03:45 crc kubenswrapper[4909]: E1126 08:03:45.501053 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:04:00 crc kubenswrapper[4909]: I1126 08:04:00.498456 4909 
scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:04:00 crc kubenswrapper[4909]: E1126 08:04:00.499311 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:04:15 crc kubenswrapper[4909]: I1126 08:04:15.499402 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:04:15 crc kubenswrapper[4909]: E1126 08:04:15.500360 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:04:28 crc kubenswrapper[4909]: I1126 08:04:28.512095 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:04:28 crc kubenswrapper[4909]: E1126 08:04:28.514447 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:04:39 crc kubenswrapper[4909]: I1126 08:04:39.499314 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:04:39 crc kubenswrapper[4909]: E1126 08:04:39.500248 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:04:52 crc kubenswrapper[4909]: I1126 08:04:52.498791 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:04:52 crc kubenswrapper[4909]: E1126 08:04:52.499691 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:05:06 crc kubenswrapper[4909]: I1126 08:05:06.499571 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:05:06 crc kubenswrapper[4909]: E1126 08:05:06.500886 4909 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:05:21 crc kubenswrapper[4909]: I1126 08:05:21.499110 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:05:22 crc kubenswrapper[4909]: I1126 08:05:22.687104 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"8a7bfd1bf34da3cb0afb7a0656490710e1a0d97e36b48c24cf443d2527995b85"} Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.329500 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vxq7k"] Nov 26 08:06:46 crc kubenswrapper[4909]: E1126 08:06:46.330938 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c1c2a9-3b99-416e-be50-db485df71b18" containerName="collect-profiles" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.330968 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c1c2a9-3b99-416e-be50-db485df71b18" containerName="collect-profiles" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.331165 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c1c2a9-3b99-416e-be50-db485df71b18" containerName="collect-profiles" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.332446 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.347986 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vxq7k"] Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.482117 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xmgc\" (UniqueName: \"kubernetes.io/projected/04534102-28f7-4138-ae28-101ce6318f64-kube-api-access-2xmgc\") pod \"certified-operators-vxq7k\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.482302 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-utilities\") pod \"certified-operators-vxq7k\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.482395 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-catalog-content\") pod \"certified-operators-vxq7k\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.583810 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xmgc\" (UniqueName: \"kubernetes.io/projected/04534102-28f7-4138-ae28-101ce6318f64-kube-api-access-2xmgc\") pod \"certified-operators-vxq7k\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.584037 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-utilities\") pod \"certified-operators-vxq7k\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.584138 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-catalog-content\") pod \"certified-operators-vxq7k\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.584656 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-utilities\") pod \"certified-operators-vxq7k\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.584700 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-catalog-content\") pod \"certified-operators-vxq7k\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.611193 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2xmgc\" (UniqueName: \"kubernetes.io/projected/04534102-28f7-4138-ae28-101ce6318f64-kube-api-access-2xmgc\") pod \"certified-operators-vxq7k\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:46 crc kubenswrapper[4909]: I1126 08:06:46.669941 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:47 crc kubenswrapper[4909]: I1126 08:06:47.154546 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vxq7k"] Nov 26 08:06:47 crc kubenswrapper[4909]: W1126 08:06:47.162965 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04534102_28f7_4138_ae28_101ce6318f64.slice/crio-3110493f3a53f33703a84066ad8929359fad43363aa9c6ff97337f27a086e057 WatchSource:0}: Error finding container 3110493f3a53f33703a84066ad8929359fad43363aa9c6ff97337f27a086e057: Status 404 returned error can't find the container with id 3110493f3a53f33703a84066ad8929359fad43363aa9c6ff97337f27a086e057 Nov 26 08:06:47 crc kubenswrapper[4909]: I1126 08:06:47.430133 4909 generic.go:334] "Generic (PLEG): container finished" podID="04534102-28f7-4138-ae28-101ce6318f64" containerID="ea5329a59768ee667dad4aa317a9feec1cc0af828d6fe7df2fc1a9764addd657" exitCode=0 Nov 26 08:06:47 crc kubenswrapper[4909]: I1126 08:06:47.430173 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxq7k" event={"ID":"04534102-28f7-4138-ae28-101ce6318f64","Type":"ContainerDied","Data":"ea5329a59768ee667dad4aa317a9feec1cc0af828d6fe7df2fc1a9764addd657"} Nov 26 08:06:47 crc kubenswrapper[4909]: I1126 08:06:47.430198 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxq7k" event={"ID":"04534102-28f7-4138-ae28-101ce6318f64","Type":"ContainerStarted","Data":"3110493f3a53f33703a84066ad8929359fad43363aa9c6ff97337f27a086e057"} Nov 26 08:06:47 crc kubenswrapper[4909]: I1126 08:06:47.433724 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 08:06:48 crc kubenswrapper[4909]: I1126 08:06:48.442901 4909 generic.go:334] "Generic (PLEG): container finished" podID="04534102-28f7-4138-ae28-101ce6318f64" containerID="9ae5e6966b4d86bce103c2e4416ff3f0f7806fa334894f2426fa7c68ca4c4528" exitCode=0 Nov 26 08:06:48 crc kubenswrapper[4909]: I1126 08:06:48.442954 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxq7k" event={"ID":"04534102-28f7-4138-ae28-101ce6318f64","Type":"ContainerDied","Data":"9ae5e6966b4d86bce103c2e4416ff3f0f7806fa334894f2426fa7c68ca4c4528"} Nov 26 08:06:49 crc kubenswrapper[4909]: I1126 08:06:49.454712 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxq7k" event={"ID":"04534102-28f7-4138-ae28-101ce6318f64","Type":"ContainerStarted","Data":"8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5"} Nov 26 08:06:56 crc kubenswrapper[4909]: I1126 08:06:56.670889 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:56 crc kubenswrapper[4909]: I1126 08:06:56.671547 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vxq7k" 
Nov 26 08:06:56 crc kubenswrapper[4909]: I1126 08:06:56.871318 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:56 crc kubenswrapper[4909]: I1126 08:06:56.894402 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vxq7k" podStartSLOduration=9.355989064 podStartE2EDuration="10.894387186s" podCreationTimestamp="2025-11-26 08:06:46 +0000 UTC" firstStartedPulling="2025-11-26 08:06:47.433394159 +0000 UTC m=+3979.579605315" lastFinishedPulling="2025-11-26 08:06:48.971792241 +0000 UTC m=+3981.118003437" observedRunningTime="2025-11-26 08:06:49.472567621 +0000 UTC m=+3981.618778787" watchObservedRunningTime="2025-11-26 08:06:56.894387186 +0000 UTC m=+3989.040598352" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.204801 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9kdtx"] Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.206885 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.214156 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kdtx"] Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.346793 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-catalog-content\") pod \"redhat-operators-9kdtx\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.346845 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnj8h\" (UniqueName: \"kubernetes.io/projected/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-kube-api-access-qnj8h\") pod \"redhat-operators-9kdtx\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.346918 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-utilities\") pod \"redhat-operators-9kdtx\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.448660 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-utilities\") pod \"redhat-operators-9kdtx\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.448737 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-catalog-content\") pod \"redhat-operators-9kdtx\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.448762 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnj8h\" (UniqueName: 
\"kubernetes.io/projected/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-kube-api-access-qnj8h\") pod \"redhat-operators-9kdtx\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.449219 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-utilities\") pod \"redhat-operators-9kdtx\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.449313 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-catalog-content\") pod \"redhat-operators-9kdtx\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.468553 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnj8h\" (UniqueName: \"kubernetes.io/projected/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-kube-api-access-qnj8h\") pod \"redhat-operators-9kdtx\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.534846 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.583897 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:06:57 crc kubenswrapper[4909]: I1126 08:06:57.962366 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kdtx"] Nov 26 08:06:58 crc kubenswrapper[4909]: I1126 08:06:58.542418 4909 generic.go:334] "Generic (PLEG): container finished" podID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" containerID="a64df6ab4a16306edea968d511bb9a7adbc783fc634c527657943bf77547598a" exitCode=0 Nov 26 08:06:58 crc kubenswrapper[4909]: I1126 08:06:58.542969 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kdtx" event={"ID":"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6","Type":"ContainerDied","Data":"a64df6ab4a16306edea968d511bb9a7adbc783fc634c527657943bf77547598a"} Nov 26 08:06:58 crc kubenswrapper[4909]: I1126 08:06:58.544853 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kdtx" event={"ID":"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6","Type":"ContainerStarted","Data":"822ae1a8094b9be60cd6894ffca6e5fdb28afab2cd6bb1a32436591d4adae384"} Nov 26 08:06:59 crc kubenswrapper[4909]: I1126 08:06:59.556156 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kdtx" event={"ID":"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6","Type":"ContainerStarted","Data":"89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b"} Nov 26 08:06:59 crc kubenswrapper[4909]: I1126 08:06:59.906287 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vxq7k"] Nov 26 08:06:59 crc kubenswrapper[4909]: I1126 08:06:59.906498 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vxq7k" podUID="04534102-28f7-4138-ae28-101ce6318f64" 
containerName="registry-server" containerID="cri-o://8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5" gracePeriod=2 Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.312848 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.494362 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-catalog-content\") pod \"04534102-28f7-4138-ae28-101ce6318f64\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.494561 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xmgc\" (UniqueName: \"kubernetes.io/projected/04534102-28f7-4138-ae28-101ce6318f64-kube-api-access-2xmgc\") pod \"04534102-28f7-4138-ae28-101ce6318f64\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.494733 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-utilities\") pod \"04534102-28f7-4138-ae28-101ce6318f64\" (UID: \"04534102-28f7-4138-ae28-101ce6318f64\") " Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.496771 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-utilities" (OuterVolumeSpecName: "utilities") pod "04534102-28f7-4138-ae28-101ce6318f64" (UID: "04534102-28f7-4138-ae28-101ce6318f64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.502340 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04534102-28f7-4138-ae28-101ce6318f64-kube-api-access-2xmgc" (OuterVolumeSpecName: "kube-api-access-2xmgc") pod "04534102-28f7-4138-ae28-101ce6318f64" (UID: "04534102-28f7-4138-ae28-101ce6318f64"). InnerVolumeSpecName "kube-api-access-2xmgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.544565 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04534102-28f7-4138-ae28-101ce6318f64" (UID: "04534102-28f7-4138-ae28-101ce6318f64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.564875 4909 generic.go:334] "Generic (PLEG): container finished" podID="04534102-28f7-4138-ae28-101ce6318f64" containerID="8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5" exitCode=0 Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.564916 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxq7k" event={"ID":"04534102-28f7-4138-ae28-101ce6318f64","Type":"ContainerDied","Data":"8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5"} Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.564954 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vxq7k" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.564969 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxq7k" event={"ID":"04534102-28f7-4138-ae28-101ce6318f64","Type":"ContainerDied","Data":"3110493f3a53f33703a84066ad8929359fad43363aa9c6ff97337f27a086e057"} Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.564990 4909 scope.go:117] "RemoveContainer" containerID="8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.568023 4909 generic.go:334] "Generic (PLEG): container finished" podID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" containerID="89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b" exitCode=0 Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.568066 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kdtx" event={"ID":"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6","Type":"ContainerDied","Data":"89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b"} Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.602122 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.602159 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xmgc\" (UniqueName: \"kubernetes.io/projected/04534102-28f7-4138-ae28-101ce6318f64-kube-api-access-2xmgc\") on node \"crc\" DevicePath \"\"" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.602173 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04534102-28f7-4138-ae28-101ce6318f64-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.603431 4909 scope.go:117] "RemoveContainer" containerID="9ae5e6966b4d86bce103c2e4416ff3f0f7806fa334894f2426fa7c68ca4c4528" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.621134 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vxq7k"] Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.626065 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vxq7k"] Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.626889 4909 scope.go:117] "RemoveContainer" containerID="ea5329a59768ee667dad4aa317a9feec1cc0af828d6fe7df2fc1a9764addd657" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.649057 4909 scope.go:117] "RemoveContainer" containerID="8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5" Nov 26 08:07:00 crc kubenswrapper[4909]: E1126 08:07:00.649445 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5\": container with ID starting with 8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5 not found: ID does not exist" containerID="8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.649475 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5"} err="failed 
to get container status \"8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5\": rpc error: code = NotFound desc = could not find container \"8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5\": container with ID starting with 8d600b63da5f0f9a5527fc96218a51b6e5ffab317acb367d05c3d06ffd2662f5 not found: ID does not exist" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.649496 4909 scope.go:117] "RemoveContainer" containerID="9ae5e6966b4d86bce103c2e4416ff3f0f7806fa334894f2426fa7c68ca4c4528" Nov 26 08:07:00 crc kubenswrapper[4909]: E1126 08:07:00.649927 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ae5e6966b4d86bce103c2e4416ff3f0f7806fa334894f2426fa7c68ca4c4528\": container with ID starting with 9ae5e6966b4d86bce103c2e4416ff3f0f7806fa334894f2426fa7c68ca4c4528 not found: ID does not exist" containerID="9ae5e6966b4d86bce103c2e4416ff3f0f7806fa334894f2426fa7c68ca4c4528" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.649990 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ae5e6966b4d86bce103c2e4416ff3f0f7806fa334894f2426fa7c68ca4c4528"} err="failed to get container status \"9ae5e6966b4d86bce103c2e4416ff3f0f7806fa334894f2426fa7c68ca4c4528\": rpc error: code = NotFound desc = could not find container \"9ae5e6966b4d86bce103c2e4416ff3f0f7806fa334894f2426fa7c68ca4c4528\": container with ID starting with 9ae5e6966b4d86bce103c2e4416ff3f0f7806fa334894f2426fa7c68ca4c4528 not found: ID does not exist" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.650025 4909 scope.go:117] "RemoveContainer" containerID="ea5329a59768ee667dad4aa317a9feec1cc0af828d6fe7df2fc1a9764addd657" Nov 26 08:07:00 crc kubenswrapper[4909]: E1126 08:07:00.650618 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea5329a59768ee667dad4aa317a9feec1cc0af828d6fe7df2fc1a9764addd657\": container with ID starting with ea5329a59768ee667dad4aa317a9feec1cc0af828d6fe7df2fc1a9764addd657 not found: ID does not exist" containerID="ea5329a59768ee667dad4aa317a9feec1cc0af828d6fe7df2fc1a9764addd657" Nov 26 08:07:00 crc kubenswrapper[4909]: I1126 08:07:00.650641 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5329a59768ee667dad4aa317a9feec1cc0af828d6fe7df2fc1a9764addd657"} err="failed to get container status \"ea5329a59768ee667dad4aa317a9feec1cc0af828d6fe7df2fc1a9764addd657\": rpc error: code = NotFound desc = could not find container \"ea5329a59768ee667dad4aa317a9feec1cc0af828d6fe7df2fc1a9764addd657\": container with ID starting with ea5329a59768ee667dad4aa317a9feec1cc0af828d6fe7df2fc1a9764addd657 not found: ID does not exist" Nov 26 08:07:01 crc kubenswrapper[4909]: I1126 08:07:01.578696 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kdtx" event={"ID":"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6","Type":"ContainerStarted","Data":"bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba"} Nov 26 08:07:01 crc kubenswrapper[4909]: I1126 08:07:01.599161 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9kdtx" podStartSLOduration=2.10793724 podStartE2EDuration="4.599145682s" podCreationTimestamp="2025-11-26 08:06:57 +0000 UTC" firstStartedPulling="2025-11-26 08:06:58.544299818 +0000 UTC m=+3990.690510994" 
lastFinishedPulling="2025-11-26 08:07:01.03550827 +0000 UTC m=+3993.181719436" observedRunningTime="2025-11-26 08:07:01.594580967 +0000 UTC m=+3993.740792133" watchObservedRunningTime="2025-11-26 08:07:01.599145682 +0000 UTC m=+3993.745356848" Nov 26 08:07:02 crc kubenswrapper[4909]: I1126 08:07:02.511949 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04534102-28f7-4138-ae28-101ce6318f64" path="/var/lib/kubelet/pods/04534102-28f7-4138-ae28-101ce6318f64/volumes" Nov 26 08:07:07 crc kubenswrapper[4909]: I1126 08:07:07.535435 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:07:07 crc kubenswrapper[4909]: I1126 08:07:07.535853 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:07:07 crc kubenswrapper[4909]: I1126 08:07:07.603256 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:07:07 crc kubenswrapper[4909]: I1126 08:07:07.694613 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:07:07 crc kubenswrapper[4909]: I1126 08:07:07.836169 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kdtx"] Nov 26 08:07:09 crc kubenswrapper[4909]: I1126 08:07:09.659961 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9kdtx" podUID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" containerName="registry-server" containerID="cri-o://bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba" gracePeriod=2 Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.562811 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.672195 4909 generic.go:334] "Generic (PLEG): container finished" podID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" containerID="bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba" exitCode=0 Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.672240 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kdtx" event={"ID":"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6","Type":"ContainerDied","Data":"bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba"} Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.672269 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kdtx" event={"ID":"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6","Type":"ContainerDied","Data":"822ae1a8094b9be60cd6894ffca6e5fdb28afab2cd6bb1a32436591d4adae384"} Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.672291 4909 scope.go:117] "RemoveContainer" containerID="bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.672288 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9kdtx" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.709288 4909 scope.go:117] "RemoveContainer" containerID="89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.738874 4909 scope.go:117] "RemoveContainer" containerID="a64df6ab4a16306edea968d511bb9a7adbc783fc634c527657943bf77547598a" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.746143 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnj8h\" (UniqueName: \"kubernetes.io/projected/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-kube-api-access-qnj8h\") pod \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.746237 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-utilities\") pod \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.746267 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-catalog-content\") pod \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\" (UID: \"c8f14d62-82b7-4e5b-8f9f-4cc8938462e6\") " Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.747604 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-utilities" (OuterVolumeSpecName: "utilities") pod "c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" (UID: "c8f14d62-82b7-4e5b-8f9f-4cc8938462e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.755566 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-kube-api-access-qnj8h" (OuterVolumeSpecName: "kube-api-access-qnj8h") pod "c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" (UID: "c8f14d62-82b7-4e5b-8f9f-4cc8938462e6"). InnerVolumeSpecName "kube-api-access-qnj8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.756966 4909 scope.go:117] "RemoveContainer" containerID="bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba" Nov 26 08:07:10 crc kubenswrapper[4909]: E1126 08:07:10.757473 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba\": container with ID starting with bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba not found: ID does not exist" containerID="bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.757506 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba"} err="failed to get container status \"bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba\": rpc error: code = NotFound desc = could not find container \"bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba\": container with ID starting with bb706c3b242b9669cabf742b693345c62b9647c14428dfcf05557617f3c750ba not found: ID does not exist" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.757525 4909 scope.go:117] "RemoveContainer" containerID="89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b" Nov 26 08:07:10 crc kubenswrapper[4909]: E1126 08:07:10.757916 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b\": container with ID starting with 89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b not found: ID does not exist" containerID="89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.757969 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b"} err="failed to get container status \"89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b\": rpc error: code = NotFound desc = could not find container \"89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b\": container with ID starting with 89bab043789a608d63c3f4bd9d8a2cb19995c4cbe86c333d1aabd9fdbab2d09b not found: ID does not exist" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.758103 4909 scope.go:117] "RemoveContainer" containerID="a64df6ab4a16306edea968d511bb9a7adbc783fc634c527657943bf77547598a" Nov 26 08:07:10 crc kubenswrapper[4909]: E1126 08:07:10.758483 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a64df6ab4a16306edea968d511bb9a7adbc783fc634c527657943bf77547598a\": container with ID starting with a64df6ab4a16306edea968d511bb9a7adbc783fc634c527657943bf77547598a not found: ID does not exist" containerID="a64df6ab4a16306edea968d511bb9a7adbc783fc634c527657943bf77547598a" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.758510 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a64df6ab4a16306edea968d511bb9a7adbc783fc634c527657943bf77547598a"} err="failed to get container status \"a64df6ab4a16306edea968d511bb9a7adbc783fc634c527657943bf77547598a\": rpc error: code = NotFound desc = could not 
find container \"a64df6ab4a16306edea968d511bb9a7adbc783fc634c527657943bf77547598a\": container with ID starting with a64df6ab4a16306edea968d511bb9a7adbc783fc634c527657943bf77547598a not found: ID does not exist" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.833767 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" (UID: "c8f14d62-82b7-4e5b-8f9f-4cc8938462e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.847438 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnj8h\" (UniqueName: \"kubernetes.io/projected/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-kube-api-access-qnj8h\") on node \"crc\" DevicePath \"\"" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.847462 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:07:10 crc kubenswrapper[4909]: I1126 08:07:10.847472 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:07:11 crc kubenswrapper[4909]: I1126 08:07:11.008562 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kdtx"] Nov 26 08:07:11 crc kubenswrapper[4909]: I1126 08:07:11.020165 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9kdtx"] Nov 26 08:07:12 crc kubenswrapper[4909]: I1126 08:07:12.514240 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" path="/var/lib/kubelet/pods/c8f14d62-82b7-4e5b-8f9f-4cc8938462e6/volumes" Nov 26 08:07:37 crc kubenswrapper[4909]: I1126 08:07:37.301420 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:07:37 crc kubenswrapper[4909]: I1126 08:07:37.302322 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:08:07 crc kubenswrapper[4909]: I1126 08:08:07.301401 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:08:07 crc kubenswrapper[4909]: I1126 08:08:07.301925 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 
26 08:08:37 crc kubenswrapper[4909]: I1126 08:08:37.301184 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:08:37 crc kubenswrapper[4909]: I1126 08:08:37.301888 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:08:37 crc kubenswrapper[4909]: I1126 08:08:37.301947 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 08:08:37 crc kubenswrapper[4909]: I1126 08:08:37.302630 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a7bfd1bf34da3cb0afb7a0656490710e1a0d97e36b48c24cf443d2527995b85"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 08:08:37 crc kubenswrapper[4909]: I1126 08:08:37.302697 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://8a7bfd1bf34da3cb0afb7a0656490710e1a0d97e36b48c24cf443d2527995b85" gracePeriod=600 Nov 26 08:08:37 crc kubenswrapper[4909]: I1126 08:08:37.455797 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="8a7bfd1bf34da3cb0afb7a0656490710e1a0d97e36b48c24cf443d2527995b85" exitCode=0 Nov 26 08:08:37 crc kubenswrapper[4909]: I1126 08:08:37.455851 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"8a7bfd1bf34da3cb0afb7a0656490710e1a0d97e36b48c24cf443d2527995b85"} Nov 26 08:08:37 crc kubenswrapper[4909]: I1126 08:08:37.455892 4909 scope.go:117] "RemoveContainer" containerID="0d0b87bd8474d32765876d60a8778d59e9d3872b0581cc82ca32b1f9fc86ffa5" Nov 26 08:08:38 crc kubenswrapper[4909]: I1126 08:08:38.465913 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5"} Nov 26 08:10:37 crc kubenswrapper[4909]: I1126 08:10:37.301536 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:10:37 crc kubenswrapper[4909]: I1126 08:10:37.302275 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:11:07 crc kubenswrapper[4909]: I1126 08:11:07.300820 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:11:07 crc kubenswrapper[4909]: I1126 08:11:07.301327 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:11:37 crc kubenswrapper[4909]: I1126 08:11:37.301314 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:11:37 crc kubenswrapper[4909]: I1126 08:11:37.302190 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:11:37 crc kubenswrapper[4909]: I1126 08:11:37.302279 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 08:11:37 crc kubenswrapper[4909]: I1126 08:11:37.303133 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 08:11:37 crc kubenswrapper[4909]: I1126 08:11:37.303235 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" gracePeriod=600 Nov 26 08:11:37 crc kubenswrapper[4909]: E1126 08:11:37.431676 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:11:37 crc kubenswrapper[4909]: I1126 08:11:37.918785 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" exitCode=0 Nov 26 08:11:37 crc kubenswrapper[4909]: I1126 08:11:37.919085 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" 
event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5"} Nov 26 08:11:37 crc kubenswrapper[4909]: I1126 08:11:37.919205 4909 scope.go:117] "RemoveContainer" containerID="8a7bfd1bf34da3cb0afb7a0656490710e1a0d97e36b48c24cf443d2527995b85" Nov 26 08:11:37 crc kubenswrapper[4909]: I1126 08:11:37.919859 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:11:37 crc kubenswrapper[4909]: E1126 08:11:37.920221 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:11:48 crc kubenswrapper[4909]: I1126 08:11:48.502305 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:11:48 crc kubenswrapper[4909]: E1126 08:11:48.502984 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:12:03 crc kubenswrapper[4909]: I1126 08:12:03.499131 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:12:03 crc kubenswrapper[4909]: E1126 08:12:03.499997 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:12:17 crc kubenswrapper[4909]: I1126 08:12:17.498785 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:12:17 crc kubenswrapper[4909]: E1126 08:12:17.499547 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:12:31 crc kubenswrapper[4909]: I1126 08:12:31.498357 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:12:31 crc kubenswrapper[4909]: E1126 08:12:31.499094 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.509396 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-744cq"] Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.514274 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-744cq"] Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.579368 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-gfq28"] Nov 26 08:12:36 crc kubenswrapper[4909]: E1126 08:12:36.579719 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" containerName="extract-content" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.579741 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" containerName="extract-content" Nov 26 08:12:36 crc kubenswrapper[4909]: E1126 08:12:36.579759 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" containerName="registry-server" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.579768 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" containerName="registry-server" Nov 26 08:12:36 crc kubenswrapper[4909]: E1126 08:12:36.579790 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04534102-28f7-4138-ae28-101ce6318f64" containerName="extract-utilities" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.579799 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="04534102-28f7-4138-ae28-101ce6318f64" containerName="extract-utilities" Nov 26 08:12:36 crc kubenswrapper[4909]: E1126 08:12:36.579819 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04534102-28f7-4138-ae28-101ce6318f64" containerName="extract-content" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.579828 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="04534102-28f7-4138-ae28-101ce6318f64" containerName="extract-content" Nov 26 08:12:36 crc kubenswrapper[4909]: E1126 08:12:36.579843 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04534102-28f7-4138-ae28-101ce6318f64" containerName="registry-server" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.579850 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="04534102-28f7-4138-ae28-101ce6318f64" containerName="registry-server" Nov 26 08:12:36 crc kubenswrapper[4909]: E1126 08:12:36.579881 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" containerName="extract-utilities" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.579889 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" containerName="extract-utilities" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.580060 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f14d62-82b7-4e5b-8f9f-4cc8938462e6" containerName="registry-server" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.580079 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="04534102-28f7-4138-ae28-101ce6318f64" containerName="registry-server" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.580706 
4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.583035 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.583352 4909 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-lv8rh" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.583531 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.583702 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.587868 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gfq28"] Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.636619 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e6d650d9-e1f3-453f-ba63-d7126f510c9e-node-mnt\") pod \"crc-storage-crc-gfq28\" (UID: \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.636729 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e6d650d9-e1f3-453f-ba63-d7126f510c9e-crc-storage\") pod \"crc-storage-crc-gfq28\" (UID: \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.636776 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s87tb\" (UniqueName: \"kubernetes.io/projected/e6d650d9-e1f3-453f-ba63-d7126f510c9e-kube-api-access-s87tb\") pod \"crc-storage-crc-gfq28\" (UID: \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.737635 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e6d650d9-e1f3-453f-ba63-d7126f510c9e-crc-storage\") pod \"crc-storage-crc-gfq28\" (UID: \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.737699 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s87tb\" (UniqueName: \"kubernetes.io/projected/e6d650d9-e1f3-453f-ba63-d7126f510c9e-kube-api-access-s87tb\") pod \"crc-storage-crc-gfq28\" (UID: \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.737755 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e6d650d9-e1f3-453f-ba63-d7126f510c9e-node-mnt\") pod \"crc-storage-crc-gfq28\" (UID: \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.738060 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e6d650d9-e1f3-453f-ba63-d7126f510c9e-node-mnt\") pod \"crc-storage-crc-gfq28\" (UID: 
\"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.739679 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e6d650d9-e1f3-453f-ba63-d7126f510c9e-crc-storage\") pod \"crc-storage-crc-gfq28\" (UID: \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.768110 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s87tb\" (UniqueName: \"kubernetes.io/projected/e6d650d9-e1f3-453f-ba63-d7126f510c9e-kube-api-access-s87tb\") pod \"crc-storage-crc-gfq28\" (UID: \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:36 crc kubenswrapper[4909]: I1126 08:12:36.898994 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:37 crc kubenswrapper[4909]: I1126 08:12:37.307232 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gfq28"] Nov 26 08:12:37 crc kubenswrapper[4909]: I1126 08:12:37.325363 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 08:12:37 crc kubenswrapper[4909]: I1126 08:12:37.415829 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gfq28" event={"ID":"e6d650d9-e1f3-453f-ba63-d7126f510c9e","Type":"ContainerStarted","Data":"3760ff88ffb1df48e145bc1f976e2f6bbc4cbf206870c77d5b15117e24fbaba5"} Nov 26 08:12:38 crc kubenswrapper[4909]: I1126 08:12:38.423923 4909 generic.go:334] "Generic (PLEG): container finished" podID="e6d650d9-e1f3-453f-ba63-d7126f510c9e" containerID="c44bb7f8fb8138ed205a2a4236b5703cbba0418bf653ca78683b63f6f1ee2578" exitCode=0 Nov 26 08:12:38 crc kubenswrapper[4909]: I1126 08:12:38.424144 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gfq28" event={"ID":"e6d650d9-e1f3-453f-ba63-d7126f510c9e","Type":"ContainerDied","Data":"c44bb7f8fb8138ed205a2a4236b5703cbba0418bf653ca78683b63f6f1ee2578"} Nov 26 08:12:38 crc kubenswrapper[4909]: I1126 08:12:38.510234 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd" path="/var/lib/kubelet/pods/c012c587-5ebe-4f4f-92a1-1e46ed6bf4cd/volumes" Nov 26 08:12:39 crc kubenswrapper[4909]: I1126 08:12:39.889611 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.084682 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e6d650d9-e1f3-453f-ba63-d7126f510c9e-node-mnt\") pod \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\" (UID: \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.085082 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s87tb\" (UniqueName: \"kubernetes.io/projected/e6d650d9-e1f3-453f-ba63-d7126f510c9e-kube-api-access-s87tb\") pod \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\" (UID: \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.085122 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6d650d9-e1f3-453f-ba63-d7126f510c9e-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "e6d650d9-e1f3-453f-ba63-d7126f510c9e" (UID: "e6d650d9-e1f3-453f-ba63-d7126f510c9e"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.086217 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e6d650d9-e1f3-453f-ba63-d7126f510c9e-crc-storage\") pod \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\" (UID: \"e6d650d9-e1f3-453f-ba63-d7126f510c9e\") " Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.086436 4909 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e6d650d9-e1f3-453f-ba63-d7126f510c9e-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.091965 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d650d9-e1f3-453f-ba63-d7126f510c9e-kube-api-access-s87tb" (OuterVolumeSpecName: "kube-api-access-s87tb") pod "e6d650d9-e1f3-453f-ba63-d7126f510c9e" (UID: "e6d650d9-e1f3-453f-ba63-d7126f510c9e"). InnerVolumeSpecName "kube-api-access-s87tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.103735 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6d650d9-e1f3-453f-ba63-d7126f510c9e-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "e6d650d9-e1f3-453f-ba63-d7126f510c9e" (UID: "e6d650d9-e1f3-453f-ba63-d7126f510c9e"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.187297 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s87tb\" (UniqueName: \"kubernetes.io/projected/e6d650d9-e1f3-453f-ba63-d7126f510c9e-kube-api-access-s87tb\") on node \"crc\" DevicePath \"\"" Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.187346 4909 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e6d650d9-e1f3-453f-ba63-d7126f510c9e-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.445972 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gfq28" event={"ID":"e6d650d9-e1f3-453f-ba63-d7126f510c9e","Type":"ContainerDied","Data":"3760ff88ffb1df48e145bc1f976e2f6bbc4cbf206870c77d5b15117e24fbaba5"} Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.446025 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-gfq28" Nov 26 08:12:40 crc kubenswrapper[4909]: I1126 08:12:40.446029 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3760ff88ffb1df48e145bc1f976e2f6bbc4cbf206870c77d5b15117e24fbaba5" Nov 26 08:12:41 crc kubenswrapper[4909]: I1126 08:12:41.954158 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-gfq28"] Nov 26 08:12:41 crc kubenswrapper[4909]: I1126 08:12:41.959309 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-gfq28"] Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.101766 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-6mss2"] Nov 26 08:12:42 crc kubenswrapper[4909]: E1126 08:12:42.102158 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d650d9-e1f3-453f-ba63-d7126f510c9e" containerName="storage" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.102187 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d650d9-e1f3-453f-ba63-d7126f510c9e" containerName="storage" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.102384 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d650d9-e1f3-453f-ba63-d7126f510c9e" containerName="storage" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.102999 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.106190 4909 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-lv8rh" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.106692 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.106786 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.106810 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.112776 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjnr\" (UniqueName: \"kubernetes.io/projected/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-kube-api-access-jrjnr\") pod \"crc-storage-crc-6mss2\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.112940 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-crc-storage\") pod \"crc-storage-crc-6mss2\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.113075 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-node-mnt\") pod \"crc-storage-crc-6mss2\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.125542 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-6mss2"] Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.214238 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-node-mnt\") pod \"crc-storage-crc-6mss2\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.214484 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjnr\" (UniqueName: \"kubernetes.io/projected/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-kube-api-access-jrjnr\") pod \"crc-storage-crc-6mss2\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.214537 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-node-mnt\") pod \"crc-storage-crc-6mss2\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.214727 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-crc-storage\") pod \"crc-storage-crc-6mss2\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " 
pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.215805 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-crc-storage\") pod \"crc-storage-crc-6mss2\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.233685 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjnr\" (UniqueName: \"kubernetes.io/projected/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-kube-api-access-jrjnr\") pod \"crc-storage-crc-6mss2\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.436321 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.512000 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d650d9-e1f3-453f-ba63-d7126f510c9e" path="/var/lib/kubelet/pods/e6d650d9-e1f3-453f-ba63-d7126f510c9e/volumes" Nov 26 08:12:42 crc kubenswrapper[4909]: W1126 08:12:42.650842 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7423ddf_fc45_4ede_9e21_d5aaa74faf98.slice/crio-085e9909431e890f38174ad61609aee500b29c352da022d98d60d3b9299f949c WatchSource:0}: Error finding container 085e9909431e890f38174ad61609aee500b29c352da022d98d60d3b9299f949c: Status 404 returned error can't find the container with id 085e9909431e890f38174ad61609aee500b29c352da022d98d60d3b9299f949c Nov 26 08:12:42 crc kubenswrapper[4909]: I1126 08:12:42.650965 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-6mss2"] Nov 26 08:12:43 crc kubenswrapper[4909]: I1126 08:12:43.470907 4909 generic.go:334] "Generic (PLEG): container finished" podID="f7423ddf-fc45-4ede-9e21-d5aaa74faf98" containerID="0b7140b6957e936bac50a327914e828488f5fa72a644135b53be8751457d7a1b" exitCode=0 Nov 26 08:12:43 crc kubenswrapper[4909]: I1126 08:12:43.470962 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6mss2" event={"ID":"f7423ddf-fc45-4ede-9e21-d5aaa74faf98","Type":"ContainerDied","Data":"0b7140b6957e936bac50a327914e828488f5fa72a644135b53be8751457d7a1b"} Nov 26 08:12:43 crc kubenswrapper[4909]: I1126 08:12:43.471188 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6mss2" event={"ID":"f7423ddf-fc45-4ede-9e21-d5aaa74faf98","Type":"ContainerStarted","Data":"085e9909431e890f38174ad61609aee500b29c352da022d98d60d3b9299f949c"} Nov 26 08:12:44 crc kubenswrapper[4909]: I1126 08:12:44.780880 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:44 crc kubenswrapper[4909]: I1126 08:12:44.854975 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrjnr\" (UniqueName: \"kubernetes.io/projected/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-kube-api-access-jrjnr\") pod \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " Nov 26 08:12:44 crc kubenswrapper[4909]: I1126 08:12:44.855059 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-crc-storage\") pod \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " Nov 26 08:12:44 crc kubenswrapper[4909]: I1126 08:12:44.855083 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-node-mnt\") pod \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\" (UID: \"f7423ddf-fc45-4ede-9e21-d5aaa74faf98\") " Nov 26 08:12:44 crc kubenswrapper[4909]: I1126 08:12:44.855316 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "f7423ddf-fc45-4ede-9e21-d5aaa74faf98" (UID: "f7423ddf-fc45-4ede-9e21-d5aaa74faf98"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 08:12:44 crc kubenswrapper[4909]: I1126 08:12:44.855789 4909 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 26 08:12:44 crc kubenswrapper[4909]: I1126 08:12:44.859686 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-kube-api-access-jrjnr" (OuterVolumeSpecName: "kube-api-access-jrjnr") pod "f7423ddf-fc45-4ede-9e21-d5aaa74faf98" (UID: "f7423ddf-fc45-4ede-9e21-d5aaa74faf98"). InnerVolumeSpecName "kube-api-access-jrjnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:12:44 crc kubenswrapper[4909]: I1126 08:12:44.885302 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "f7423ddf-fc45-4ede-9e21-d5aaa74faf98" (UID: "f7423ddf-fc45-4ede-9e21-d5aaa74faf98"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:12:44 crc kubenswrapper[4909]: I1126 08:12:44.957292 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrjnr\" (UniqueName: \"kubernetes.io/projected/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-kube-api-access-jrjnr\") on node \"crc\" DevicePath \"\"" Nov 26 08:12:44 crc kubenswrapper[4909]: I1126 08:12:44.957336 4909 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f7423ddf-fc45-4ede-9e21-d5aaa74faf98-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 26 08:12:45 crc kubenswrapper[4909]: I1126 08:12:45.492846 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6mss2" event={"ID":"f7423ddf-fc45-4ede-9e21-d5aaa74faf98","Type":"ContainerDied","Data":"085e9909431e890f38174ad61609aee500b29c352da022d98d60d3b9299f949c"} Nov 26 08:12:45 crc kubenswrapper[4909]: I1126 08:12:45.492911 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="085e9909431e890f38174ad61609aee500b29c352da022d98d60d3b9299f949c" Nov 26 08:12:45 crc kubenswrapper[4909]: I1126 08:12:45.492954 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-6mss2" Nov 26 08:12:45 crc kubenswrapper[4909]: I1126 08:12:45.499774 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:12:45 crc kubenswrapper[4909]: E1126 08:12:45.500117 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:13:00 crc kubenswrapper[4909]: I1126 08:13:00.498851 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:13:00 crc kubenswrapper[4909]: E1126 08:13:00.500763 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:13:12 crc kubenswrapper[4909]: I1126 08:13:12.499462 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:13:12 crc kubenswrapper[4909]: E1126 08:13:12.500489 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:13:24 crc kubenswrapper[4909]: I1126 08:13:24.499835 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:13:24 crc kubenswrapper[4909]: E1126 
08:13:24.500819 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:13:35 crc kubenswrapper[4909]: I1126 08:13:35.499133 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:13:35 crc kubenswrapper[4909]: E1126 08:13:35.500248 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:13:36 crc kubenswrapper[4909]: I1126 08:13:36.563694 4909 scope.go:117] "RemoveContainer" containerID="d94813654796775feda9c1790440b220cdc17f20fd05eb78ca9acc78d3d9d895" Nov 26 08:13:48 crc kubenswrapper[4909]: I1126 08:13:48.503779 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:13:48 crc kubenswrapper[4909]: E1126 08:13:48.504626 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:14:02 crc kubenswrapper[4909]: I1126 08:14:02.499576 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:14:02 crc kubenswrapper[4909]: E1126 08:14:02.500953 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:14:17 crc kubenswrapper[4909]: I1126 08:14:17.498776 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:14:17 crc kubenswrapper[4909]: E1126 08:14:17.499462 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.728184 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2pr6n"] Nov 26 08:14:31 crc kubenswrapper[4909]: E1126 08:14:31.728961 4909 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7423ddf-fc45-4ede-9e21-d5aaa74faf98" containerName="storage" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.728973 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7423ddf-fc45-4ede-9e21-d5aaa74faf98" containerName="storage" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.729106 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7423ddf-fc45-4ede-9e21-d5aaa74faf98" containerName="storage" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.730109 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.746914 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2pr6n"] Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.872493 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxcm2\" (UniqueName: \"kubernetes.io/projected/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-kube-api-access-vxcm2\") pod \"redhat-marketplace-2pr6n\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.872549 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-utilities\") pod \"redhat-marketplace-2pr6n\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.872711 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-catalog-content\") pod \"redhat-marketplace-2pr6n\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.973620 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-catalog-content\") pod \"redhat-marketplace-2pr6n\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.973710 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxcm2\" (UniqueName: \"kubernetes.io/projected/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-kube-api-access-vxcm2\") pod \"redhat-marketplace-2pr6n\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.973736 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-utilities\") pod \"redhat-marketplace-2pr6n\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.974271 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-utilities\") pod 
\"redhat-marketplace-2pr6n\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:31 crc kubenswrapper[4909]: I1126 08:14:31.974538 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-catalog-content\") pod \"redhat-marketplace-2pr6n\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:32 crc kubenswrapper[4909]: I1126 08:14:32.003541 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxcm2\" (UniqueName: \"kubernetes.io/projected/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-kube-api-access-vxcm2\") pod \"redhat-marketplace-2pr6n\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:32 crc kubenswrapper[4909]: I1126 08:14:32.057977 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:32 crc kubenswrapper[4909]: I1126 08:14:32.303273 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2pr6n"] Nov 26 08:14:32 crc kubenswrapper[4909]: I1126 08:14:32.386130 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pr6n" event={"ID":"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9","Type":"ContainerStarted","Data":"8521346a3bdfea4e08eef8f3d626b91e8c0a921a14083d8228945690dcbb1f8d"} Nov 26 08:14:32 crc kubenswrapper[4909]: I1126 08:14:32.499084 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:14:32 crc kubenswrapper[4909]: E1126 08:14:32.499355 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:14:33 crc kubenswrapper[4909]: I1126 08:14:33.397635 4909 generic.go:334] "Generic (PLEG): container finished" podID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" containerID="34e4e538960f43c761bfc75bf19d99fbabbc162cc6afd37738e74e268382d584" exitCode=0 Nov 26 08:14:33 crc kubenswrapper[4909]: I1126 08:14:33.397678 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pr6n" event={"ID":"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9","Type":"ContainerDied","Data":"34e4e538960f43c761bfc75bf19d99fbabbc162cc6afd37738e74e268382d584"} Nov 26 08:14:34 crc kubenswrapper[4909]: I1126 08:14:34.408115 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pr6n" event={"ID":"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9","Type":"ContainerStarted","Data":"ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e"} Nov 26 08:14:35 crc kubenswrapper[4909]: I1126 08:14:35.417559 4909 generic.go:334] "Generic (PLEG): container finished" podID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" containerID="ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e" exitCode=0 Nov 26 08:14:35 crc kubenswrapper[4909]: I1126 08:14:35.417560 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-2pr6n" event={"ID":"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9","Type":"ContainerDied","Data":"ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e"} Nov 26 08:14:36 crc kubenswrapper[4909]: I1126 08:14:36.428306 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pr6n" event={"ID":"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9","Type":"ContainerStarted","Data":"365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839"} Nov 26 08:14:36 crc kubenswrapper[4909]: I1126 08:14:36.451592 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2pr6n" podStartSLOduration=3.044132116 podStartE2EDuration="5.451566996s" podCreationTimestamp="2025-11-26 08:14:31 +0000 UTC" firstStartedPulling="2025-11-26 08:14:33.399434298 +0000 UTC m=+4445.545645504" lastFinishedPulling="2025-11-26 08:14:35.806869218 +0000 UTC m=+4447.953080384" observedRunningTime="2025-11-26 08:14:36.446721724 +0000 UTC m=+4448.592932910" watchObservedRunningTime="2025-11-26 08:14:36.451566996 +0000 UTC m=+4448.597778162" Nov 26 08:14:42 crc kubenswrapper[4909]: I1126 08:14:42.058469 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:42 crc kubenswrapper[4909]: I1126 08:14:42.059085 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:42 crc kubenswrapper[4909]: I1126 08:14:42.122069 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:42 crc kubenswrapper[4909]: I1126 08:14:42.526476 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:42 crc kubenswrapper[4909]: I1126 08:14:42.589161 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2pr6n"] Nov 26 08:14:44 crc kubenswrapper[4909]: I1126 08:14:44.501788 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2pr6n" podUID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" containerName="registry-server" containerID="cri-o://365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839" gracePeriod=2 Nov 26 08:14:44 crc kubenswrapper[4909]: I1126 08:14:44.891221 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:44 crc kubenswrapper[4909]: I1126 08:14:44.971481 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-utilities\") pod \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " Nov 26 08:14:44 crc kubenswrapper[4909]: I1126 08:14:44.971628 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-catalog-content\") pod \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " Nov 26 08:14:44 crc kubenswrapper[4909]: I1126 08:14:44.971723 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxcm2\" (UniqueName: \"kubernetes.io/projected/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-kube-api-access-vxcm2\") pod \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\" (UID: \"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9\") " Nov 26 08:14:44 crc kubenswrapper[4909]: I1126 08:14:44.972734 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-utilities" (OuterVolumeSpecName: "utilities") pod "9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" (UID: "9ae2efa9-4779-4bbb-93f2-521ddf6d6da9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:14:44 crc kubenswrapper[4909]: I1126 08:14:44.976914 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-kube-api-access-vxcm2" (OuterVolumeSpecName: "kube-api-access-vxcm2") pod "9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" (UID: "9ae2efa9-4779-4bbb-93f2-521ddf6d6da9"). InnerVolumeSpecName "kube-api-access-vxcm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:14:44 crc kubenswrapper[4909]: I1126 08:14:44.994559 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" (UID: "9ae2efa9-4779-4bbb-93f2-521ddf6d6da9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.073789 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.073826 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxcm2\" (UniqueName: \"kubernetes.io/projected/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-kube-api-access-vxcm2\") on node \"crc\" DevicePath \"\"" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.073839 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.520641 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pr6n" event={"ID":"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9","Type":"ContainerDied","Data":"365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839"} Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.520691 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2pr6n" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.520745 4909 scope.go:117] "RemoveContainer" containerID="365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.521068 4909 generic.go:334] "Generic (PLEG): container finished" podID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" containerID="365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839" exitCode=0 Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.521112 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pr6n" event={"ID":"9ae2efa9-4779-4bbb-93f2-521ddf6d6da9","Type":"ContainerDied","Data":"8521346a3bdfea4e08eef8f3d626b91e8c0a921a14083d8228945690dcbb1f8d"} Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.543558 4909 scope.go:117] "RemoveContainer" containerID="ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.562881 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2pr6n"] Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.566409 4909 scope.go:117] "RemoveContainer" containerID="34e4e538960f43c761bfc75bf19d99fbabbc162cc6afd37738e74e268382d584" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.569110 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2pr6n"] Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.608887 4909 scope.go:117] "RemoveContainer" containerID="365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839" Nov 26 08:14:45 crc kubenswrapper[4909]: E1126 08:14:45.609403 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839\": container with ID starting with 365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839 not found: ID does not exist" containerID="365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.609455 4909 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839"} err="failed to get container status \"365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839\": rpc error: code = NotFound desc = could not find container \"365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839\": container with ID starting with 365caad13e356afe892efa59b6738fe1bb4e29d95db8ede1c752017f9a2da839 not found: ID does not exist" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.609503 4909 scope.go:117] "RemoveContainer" containerID="ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e" Nov 26 08:14:45 crc kubenswrapper[4909]: E1126 08:14:45.609863 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e\": container with ID starting with ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e not found: ID does not exist" containerID="ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.609894 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e"} err="failed to get container status \"ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e\": rpc error: code = NotFound desc = could not find container \"ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e\": container with ID starting with ed87358ccec8139e3266d3589daf1bf9dd9c43dddc44a9fd95753be14a4db14e not found: ID does not exist" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.609919 4909 scope.go:117] "RemoveContainer" containerID="34e4e538960f43c761bfc75bf19d99fbabbc162cc6afd37738e74e268382d584" Nov 26 08:14:45 crc kubenswrapper[4909]: E1126 08:14:45.610151 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34e4e538960f43c761bfc75bf19d99fbabbc162cc6afd37738e74e268382d584\": container with ID starting with 34e4e538960f43c761bfc75bf19d99fbabbc162cc6afd37738e74e268382d584 not found: ID does not exist" containerID="34e4e538960f43c761bfc75bf19d99fbabbc162cc6afd37738e74e268382d584" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.610179 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e4e538960f43c761bfc75bf19d99fbabbc162cc6afd37738e74e268382d584"} err="failed to get container status \"34e4e538960f43c761bfc75bf19d99fbabbc162cc6afd37738e74e268382d584\": rpc error: code = NotFound desc = could not find container \"34e4e538960f43c761bfc75bf19d99fbabbc162cc6afd37738e74e268382d584\": container with ID starting with 34e4e538960f43c761bfc75bf19d99fbabbc162cc6afd37738e74e268382d584 not found: ID does not exist" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.962673 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tzjx5"] Nov 26 08:14:45 crc kubenswrapper[4909]: E1126 08:14:45.962997 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" containerName="registry-server" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.963013 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" 
containerName="registry-server" Nov 26 08:14:45 crc kubenswrapper[4909]: E1126 08:14:45.963029 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" containerName="extract-utilities" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.963038 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" containerName="extract-utilities" Nov 26 08:14:45 crc kubenswrapper[4909]: E1126 08:14:45.963070 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" containerName="extract-content" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.963079 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" containerName="extract-content" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.963288 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" containerName="registry-server" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.964749 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:45 crc kubenswrapper[4909]: I1126 08:14:45.979637 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tzjx5"] Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.096005 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-catalog-content\") pod \"community-operators-tzjx5\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.096172 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-utilities\") pod \"community-operators-tzjx5\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.096208 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dltz4\" (UniqueName: \"kubernetes.io/projected/f522c749-8831-44b2-88c8-c288991cf327-kube-api-access-dltz4\") pod \"community-operators-tzjx5\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.198291 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-utilities\") pod \"community-operators-tzjx5\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.198678 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dltz4\" (UniqueName: \"kubernetes.io/projected/f522c749-8831-44b2-88c8-c288991cf327-kube-api-access-dltz4\") pod \"community-operators-tzjx5\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.198898 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-catalog-content\") pod \"community-operators-tzjx5\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.199355 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-catalog-content\") pod \"community-operators-tzjx5\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.199560 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-utilities\") pod \"community-operators-tzjx5\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.217369 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dltz4\" (UniqueName: \"kubernetes.io/projected/f522c749-8831-44b2-88c8-c288991cf327-kube-api-access-dltz4\") pod \"community-operators-tzjx5\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.296941 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.510268 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae2efa9-4779-4bbb-93f2-521ddf6d6da9" path="/var/lib/kubelet/pods/9ae2efa9-4779-4bbb-93f2-521ddf6d6da9/volumes" Nov 26 08:14:46 crc kubenswrapper[4909]: I1126 08:14:46.777872 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tzjx5"] Nov 26 08:14:47 crc kubenswrapper[4909]: I1126 08:14:47.498458 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:14:47 crc kubenswrapper[4909]: E1126 08:14:47.499004 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:14:47 crc kubenswrapper[4909]: I1126 08:14:47.540806 4909 generic.go:334] "Generic (PLEG): container finished" podID="f522c749-8831-44b2-88c8-c288991cf327" containerID="4144fe88baa41025e3c8024ad830c08007b6064790046a20d8496eb107437fe3" exitCode=0 Nov 26 08:14:47 crc kubenswrapper[4909]: I1126 08:14:47.540864 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzjx5" event={"ID":"f522c749-8831-44b2-88c8-c288991cf327","Type":"ContainerDied","Data":"4144fe88baa41025e3c8024ad830c08007b6064790046a20d8496eb107437fe3"} Nov 26 08:14:47 crc kubenswrapper[4909]: I1126 08:14:47.540902 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzjx5" 
event={"ID":"f522c749-8831-44b2-88c8-c288991cf327","Type":"ContainerStarted","Data":"1f731357f89abf7690e09b5f44fa01f269b8414b1d946ab621be7654ac7dc62c"} Nov 26 08:14:48 crc kubenswrapper[4909]: I1126 08:14:48.554338 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzjx5" event={"ID":"f522c749-8831-44b2-88c8-c288991cf327","Type":"ContainerStarted","Data":"c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6"} Nov 26 08:14:49 crc kubenswrapper[4909]: I1126 08:14:49.567230 4909 generic.go:334] "Generic (PLEG): container finished" podID="f522c749-8831-44b2-88c8-c288991cf327" containerID="c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6" exitCode=0 Nov 26 08:14:49 crc kubenswrapper[4909]: I1126 08:14:49.568032 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzjx5" event={"ID":"f522c749-8831-44b2-88c8-c288991cf327","Type":"ContainerDied","Data":"c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6"} Nov 26 08:14:50 crc kubenswrapper[4909]: I1126 08:14:50.577361 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzjx5" event={"ID":"f522c749-8831-44b2-88c8-c288991cf327","Type":"ContainerStarted","Data":"81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1"} Nov 26 08:14:50 crc kubenswrapper[4909]: I1126 08:14:50.608781 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tzjx5" podStartSLOduration=3.197964009 podStartE2EDuration="5.608761132s" podCreationTimestamp="2025-11-26 08:14:45 +0000 UTC" firstStartedPulling="2025-11-26 08:14:47.544073831 +0000 UTC m=+4459.690285037" lastFinishedPulling="2025-11-26 08:14:49.954870994 +0000 UTC m=+4462.101082160" observedRunningTime="2025-11-26 08:14:50.598678768 +0000 UTC m=+4462.744889944" watchObservedRunningTime="2025-11-26 08:14:50.608761132 +0000 UTC m=+4462.754972308" Nov 26 08:14:56 crc kubenswrapper[4909]: I1126 08:14:56.297622 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:56 crc kubenswrapper[4909]: I1126 08:14:56.298139 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:56 crc kubenswrapper[4909]: I1126 08:14:56.344936 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:56 crc kubenswrapper[4909]: I1126 08:14:56.767792 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:56 crc kubenswrapper[4909]: I1126 08:14:56.807235 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tzjx5"] Nov 26 08:14:58 crc kubenswrapper[4909]: I1126 08:14:58.651140 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tzjx5" podUID="f522c749-8831-44b2-88c8-c288991cf327" containerName="registry-server" containerID="cri-o://81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1" gracePeriod=2 Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.054435 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.199014 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-utilities\") pod \"f522c749-8831-44b2-88c8-c288991cf327\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.199060 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-catalog-content\") pod \"f522c749-8831-44b2-88c8-c288991cf327\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.199148 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dltz4\" (UniqueName: \"kubernetes.io/projected/f522c749-8831-44b2-88c8-c288991cf327-kube-api-access-dltz4\") pod \"f522c749-8831-44b2-88c8-c288991cf327\" (UID: \"f522c749-8831-44b2-88c8-c288991cf327\") " Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.200437 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-utilities" (OuterVolumeSpecName: "utilities") pod "f522c749-8831-44b2-88c8-c288991cf327" (UID: "f522c749-8831-44b2-88c8-c288991cf327"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.204390 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f522c749-8831-44b2-88c8-c288991cf327-kube-api-access-dltz4" (OuterVolumeSpecName: "kube-api-access-dltz4") pod "f522c749-8831-44b2-88c8-c288991cf327" (UID: "f522c749-8831-44b2-88c8-c288991cf327"). InnerVolumeSpecName "kube-api-access-dltz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.300893 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.300927 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dltz4\" (UniqueName: \"kubernetes.io/projected/f522c749-8831-44b2-88c8-c288991cf327-kube-api-access-dltz4\") on node \"crc\" DevicePath \"\"" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.396893 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f522c749-8831-44b2-88c8-c288991cf327" (UID: "f522c749-8831-44b2-88c8-c288991cf327"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.402821 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f522c749-8831-44b2-88c8-c288991cf327-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.665916 4909 generic.go:334] "Generic (PLEG): container finished" podID="f522c749-8831-44b2-88c8-c288991cf327" containerID="81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1" exitCode=0 Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.665974 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzjx5" event={"ID":"f522c749-8831-44b2-88c8-c288991cf327","Type":"ContainerDied","Data":"81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1"} Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.666019 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzjx5" event={"ID":"f522c749-8831-44b2-88c8-c288991cf327","Type":"ContainerDied","Data":"1f731357f89abf7690e09b5f44fa01f269b8414b1d946ab621be7654ac7dc62c"} Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.666041 4909 scope.go:117] "RemoveContainer" containerID="81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.666041 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzjx5" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.703277 4909 scope.go:117] "RemoveContainer" containerID="c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.704626 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tzjx5"] Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.713994 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tzjx5"] Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.737823 4909 scope.go:117] "RemoveContainer" containerID="4144fe88baa41025e3c8024ad830c08007b6064790046a20d8496eb107437fe3" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.754325 4909 scope.go:117] "RemoveContainer" containerID="81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1" Nov 26 08:14:59 crc kubenswrapper[4909]: E1126 08:14:59.754787 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1\": container with ID starting with 81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1 not found: ID does not exist" containerID="81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.754848 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1"} err="failed to get container status \"81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1\": rpc error: code = NotFound desc = could not find container \"81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1\": container with ID starting with 81c2fd5665758134eae95baa8bb6dc83253380fc32349937328c5532aec1c5a1 not found: ID does not exist" Nov 26 
08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.754884 4909 scope.go:117] "RemoveContainer" containerID="c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6" Nov 26 08:14:59 crc kubenswrapper[4909]: E1126 08:14:59.755305 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6\": container with ID starting with c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6 not found: ID does not exist" containerID="c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.755342 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6"} err="failed to get container status \"c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6\": rpc error: code = NotFound desc = could not find container \"c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6\": container with ID starting with c257f6e1e1cc4668b46e8d4d4f42789df5f960c3a584420e1286ded5662892d6 not found: ID does not exist" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.755365 4909 scope.go:117] "RemoveContainer" containerID="4144fe88baa41025e3c8024ad830c08007b6064790046a20d8496eb107437fe3" Nov 26 08:14:59 crc kubenswrapper[4909]: E1126 08:14:59.755632 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4144fe88baa41025e3c8024ad830c08007b6064790046a20d8496eb107437fe3\": container with ID starting with 4144fe88baa41025e3c8024ad830c08007b6064790046a20d8496eb107437fe3 not found: ID does not exist" containerID="4144fe88baa41025e3c8024ad830c08007b6064790046a20d8496eb107437fe3" Nov 26 08:14:59 crc kubenswrapper[4909]: I1126 08:14:59.755662 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4144fe88baa41025e3c8024ad830c08007b6064790046a20d8496eb107437fe3"} err="failed to get container status \"4144fe88baa41025e3c8024ad830c08007b6064790046a20d8496eb107437fe3\": rpc error: code = NotFound desc = could not find container \"4144fe88baa41025e3c8024ad830c08007b6064790046a20d8496eb107437fe3\": container with ID starting with 4144fe88baa41025e3c8024ad830c08007b6064790046a20d8496eb107437fe3 not found: ID does not exist" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.156623 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4"] Nov 26 08:15:00 crc kubenswrapper[4909]: E1126 08:15:00.157446 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f522c749-8831-44b2-88c8-c288991cf327" containerName="extract-content" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.157476 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f522c749-8831-44b2-88c8-c288991cf327" containerName="extract-content" Nov 26 08:15:00 crc kubenswrapper[4909]: E1126 08:15:00.157501 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f522c749-8831-44b2-88c8-c288991cf327" containerName="extract-utilities" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.157512 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f522c749-8831-44b2-88c8-c288991cf327" containerName="extract-utilities" Nov 26 08:15:00 crc kubenswrapper[4909]: E1126 08:15:00.157555 4909 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f522c749-8831-44b2-88c8-c288991cf327" containerName="registry-server" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.157567 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f522c749-8831-44b2-88c8-c288991cf327" containerName="registry-server" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.157878 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f522c749-8831-44b2-88c8-c288991cf327" containerName="registry-server" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.158820 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.162368 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.162445 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.167120 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4"] Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.318034 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/609aeb0b-9285-419e-986d-5b3bd41468c8-secret-volume\") pod \"collect-profiles-29402415-7jbd4\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.318132 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsh5q\" (UniqueName: \"kubernetes.io/projected/609aeb0b-9285-419e-986d-5b3bd41468c8-kube-api-access-bsh5q\") pod \"collect-profiles-29402415-7jbd4\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.318182 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/609aeb0b-9285-419e-986d-5b3bd41468c8-config-volume\") pod \"collect-profiles-29402415-7jbd4\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.420045 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/609aeb0b-9285-419e-986d-5b3bd41468c8-config-volume\") pod \"collect-profiles-29402415-7jbd4\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.420178 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/609aeb0b-9285-419e-986d-5b3bd41468c8-secret-volume\") pod \"collect-profiles-29402415-7jbd4\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.420208 
4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsh5q\" (UniqueName: \"kubernetes.io/projected/609aeb0b-9285-419e-986d-5b3bd41468c8-kube-api-access-bsh5q\") pod \"collect-profiles-29402415-7jbd4\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.422018 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/609aeb0b-9285-419e-986d-5b3bd41468c8-config-volume\") pod \"collect-profiles-29402415-7jbd4\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.433766 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/609aeb0b-9285-419e-986d-5b3bd41468c8-secret-volume\") pod \"collect-profiles-29402415-7jbd4\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.452469 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsh5q\" (UniqueName: \"kubernetes.io/projected/609aeb0b-9285-419e-986d-5b3bd41468c8-kube-api-access-bsh5q\") pod \"collect-profiles-29402415-7jbd4\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.486211 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.510468 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f522c749-8831-44b2-88c8-c288991cf327" path="/var/lib/kubelet/pods/f522c749-8831-44b2-88c8-c288991cf327/volumes" Nov 26 08:15:00 crc kubenswrapper[4909]: I1126 08:15:00.906358 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4"] Nov 26 08:15:01 crc kubenswrapper[4909]: I1126 08:15:01.499024 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:15:01 crc kubenswrapper[4909]: E1126 08:15:01.499263 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:15:01 crc kubenswrapper[4909]: I1126 08:15:01.686339 4909 generic.go:334] "Generic (PLEG): container finished" podID="609aeb0b-9285-419e-986d-5b3bd41468c8" containerID="200aeef3a31c5b1e855def9c5cc6bc4d697083bb276c984f430b753f13b116f8" exitCode=0 Nov 26 08:15:01 crc kubenswrapper[4909]: I1126 08:15:01.686449 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" event={"ID":"609aeb0b-9285-419e-986d-5b3bd41468c8","Type":"ContainerDied","Data":"200aeef3a31c5b1e855def9c5cc6bc4d697083bb276c984f430b753f13b116f8"} Nov 
26 08:15:01 crc kubenswrapper[4909]: I1126 08:15:01.686681 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" event={"ID":"609aeb0b-9285-419e-986d-5b3bd41468c8","Type":"ContainerStarted","Data":"8ed0a2158d3abbee429b5f8ff697bc8751a211201dc4b376e72044b3828cca5c"} Nov 26 08:15:02 crc kubenswrapper[4909]: I1126 08:15:02.992951 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.054688 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsh5q\" (UniqueName: \"kubernetes.io/projected/609aeb0b-9285-419e-986d-5b3bd41468c8-kube-api-access-bsh5q\") pod \"609aeb0b-9285-419e-986d-5b3bd41468c8\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.054822 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/609aeb0b-9285-419e-986d-5b3bd41468c8-config-volume\") pod \"609aeb0b-9285-419e-986d-5b3bd41468c8\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.054916 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/609aeb0b-9285-419e-986d-5b3bd41468c8-secret-volume\") pod \"609aeb0b-9285-419e-986d-5b3bd41468c8\" (UID: \"609aeb0b-9285-419e-986d-5b3bd41468c8\") " Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.055534 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609aeb0b-9285-419e-986d-5b3bd41468c8-config-volume" (OuterVolumeSpecName: "config-volume") pod "609aeb0b-9285-419e-986d-5b3bd41468c8" (UID: "609aeb0b-9285-419e-986d-5b3bd41468c8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.060169 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/609aeb0b-9285-419e-986d-5b3bd41468c8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "609aeb0b-9285-419e-986d-5b3bd41468c8" (UID: "609aeb0b-9285-419e-986d-5b3bd41468c8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.061486 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609aeb0b-9285-419e-986d-5b3bd41468c8-kube-api-access-bsh5q" (OuterVolumeSpecName: "kube-api-access-bsh5q") pod "609aeb0b-9285-419e-986d-5b3bd41468c8" (UID: "609aeb0b-9285-419e-986d-5b3bd41468c8"). InnerVolumeSpecName "kube-api-access-bsh5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.158155 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/609aeb0b-9285-419e-986d-5b3bd41468c8-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.158225 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/609aeb0b-9285-419e-986d-5b3bd41468c8-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.158239 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsh5q\" (UniqueName: \"kubernetes.io/projected/609aeb0b-9285-419e-986d-5b3bd41468c8-kube-api-access-bsh5q\") on node \"crc\" DevicePath \"\"" Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.712763 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" event={"ID":"609aeb0b-9285-419e-986d-5b3bd41468c8","Type":"ContainerDied","Data":"8ed0a2158d3abbee429b5f8ff697bc8751a211201dc4b376e72044b3828cca5c"} Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.712803 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ed0a2158d3abbee429b5f8ff697bc8751a211201dc4b376e72044b3828cca5c" Nov 26 08:15:03 crc kubenswrapper[4909]: I1126 08:15:03.712843 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4" Nov 26 08:15:04 crc kubenswrapper[4909]: I1126 08:15:04.075528 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx"] Nov 26 08:15:04 crc kubenswrapper[4909]: I1126 08:15:04.091329 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402370-xqnvx"] Nov 26 08:15:04 crc kubenswrapper[4909]: I1126 08:15:04.507811 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8482ff-880f-453b-bc38-5578ee3fad7f" path="/var/lib/kubelet/pods/da8482ff-880f-453b-bc38-5578ee3fad7f/volumes" Nov 26 08:15:14 crc kubenswrapper[4909]: I1126 08:15:14.498437 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:15:14 crc kubenswrapper[4909]: E1126 08:15:14.499263 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:15:27 crc kubenswrapper[4909]: I1126 08:15:27.498801 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:15:27 crc kubenswrapper[4909]: E1126 08:15:27.499713 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:15:36 crc kubenswrapper[4909]: I1126 08:15:36.634630 4909 scope.go:117] "RemoveContainer" containerID="3e1a9729b8c31d1cb51530eb594a1e4c36685930196775afd19f0e82cfd9341e" Nov 26 08:15:38 crc kubenswrapper[4909]: I1126 08:15:38.505488 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:15:38 crc kubenswrapper[4909]: E1126 08:15:38.506262 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:15:51 crc kubenswrapper[4909]: I1126 08:15:51.499232 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:15:51 crc kubenswrapper[4909]: E1126 08:15:51.500115 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.621994 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58f44444cf-c4kl8"] Nov 26 08:15:57 crc kubenswrapper[4909]: E1126 08:15:57.624042 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609aeb0b-9285-419e-986d-5b3bd41468c8" containerName="collect-profiles" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.624150 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="609aeb0b-9285-419e-986d-5b3bd41468c8" containerName="collect-profiles" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.624453 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="609aeb0b-9285-419e-986d-5b3bd41468c8" containerName="collect-profiles" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.629639 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f44444cf-c4kl8" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.630919 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-559b986c67-8z275"] Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.632188 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.636046 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.636284 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.636476 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-dr8wv" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.636706 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.637193 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.641706 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58f44444cf-c4kl8"] Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.659555 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-559b986c67-8z275"] Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.782413 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-dns-svc\") pod \"dnsmasq-dns-559b986c67-8z275\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") " pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.782723 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rns6r\" (UniqueName: \"kubernetes.io/projected/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-kube-api-access-rns6r\") pod \"dnsmasq-dns-58f44444cf-c4kl8\" (UID: \"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c\") " pod="openstack/dnsmasq-dns-58f44444cf-c4kl8" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.782873 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fps6v\" (UniqueName: \"kubernetes.io/projected/05d77594-6cc8-4ba2-b7af-236e7362f239-kube-api-access-fps6v\") pod \"dnsmasq-dns-559b986c67-8z275\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") " pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.782994 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-config\") pod \"dnsmasq-dns-559b986c67-8z275\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") " pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.783139 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-config\") pod \"dnsmasq-dns-58f44444cf-c4kl8\" (UID: \"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c\") " pod="openstack/dnsmasq-dns-58f44444cf-c4kl8" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.826280 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-559b986c67-8z275"] Nov 26 08:15:57 crc kubenswrapper[4909]: E1126 08:15:57.827025 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config 
dns-svc kube-api-access-fps6v], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-559b986c67-8z275" podUID="05d77594-6cc8-4ba2-b7af-236e7362f239" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.866413 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-r2cd8"] Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.867626 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.884383 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-config\") pod \"dnsmasq-dns-559b986c67-8z275\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") " pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.884728 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-config\") pod \"dnsmasq-dns-58f44444cf-c4kl8\" (UID: \"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c\") " pod="openstack/dnsmasq-dns-58f44444cf-c4kl8" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.884910 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-dns-svc\") pod \"dnsmasq-dns-559b986c67-8z275\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") " pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.885032 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rns6r\" (UniqueName: \"kubernetes.io/projected/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-kube-api-access-rns6r\") pod \"dnsmasq-dns-58f44444cf-c4kl8\" (UID: \"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c\") " pod="openstack/dnsmasq-dns-58f44444cf-c4kl8" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.885196 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fps6v\" (UniqueName: \"kubernetes.io/projected/05d77594-6cc8-4ba2-b7af-236e7362f239-kube-api-access-fps6v\") pod \"dnsmasq-dns-559b986c67-8z275\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") " pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.885400 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-r2cd8"] Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.885623 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-config\") pod \"dnsmasq-dns-559b986c67-8z275\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") " pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.885861 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-dns-svc\") pod \"dnsmasq-dns-559b986c67-8z275\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") " pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.886630 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-config\") pod \"dnsmasq-dns-58f44444cf-c4kl8\" (UID: \"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c\") " pod="openstack/dnsmasq-dns-58f44444cf-c4kl8" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.911525 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fps6v\" (UniqueName: \"kubernetes.io/projected/05d77594-6cc8-4ba2-b7af-236e7362f239-kube-api-access-fps6v\") pod \"dnsmasq-dns-559b986c67-8z275\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") " pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.932177 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rns6r\" (UniqueName: \"kubernetes.io/projected/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-kube-api-access-rns6r\") pod \"dnsmasq-dns-58f44444cf-c4kl8\" (UID: \"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c\") " pod="openstack/dnsmasq-dns-58f44444cf-c4kl8" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.960499 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f44444cf-c4kl8" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.986328 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-config\") pod \"dnsmasq-dns-5d7b5456f5-r2cd8\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.986420 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-r2cd8\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:15:57 crc kubenswrapper[4909]: I1126 08:15:57.986555 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khbt6\" (UniqueName: \"kubernetes.io/projected/45938321-3fef-49a2-93a5-7e9e1efa5280-kube-api-access-khbt6\") pod \"dnsmasq-dns-5d7b5456f5-r2cd8\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.087996 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khbt6\" (UniqueName: \"kubernetes.io/projected/45938321-3fef-49a2-93a5-7e9e1efa5280-kube-api-access-khbt6\") pod \"dnsmasq-dns-5d7b5456f5-r2cd8\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.088368 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-config\") pod \"dnsmasq-dns-5d7b5456f5-r2cd8\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.088522 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-r2cd8\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:15:58 
crc kubenswrapper[4909]: I1126 08:15:58.089284 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-config\") pod \"dnsmasq-dns-5d7b5456f5-r2cd8\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.089326 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-r2cd8\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.126452 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khbt6\" (UniqueName: \"kubernetes.io/projected/45938321-3fef-49a2-93a5-7e9e1efa5280-kube-api-access-khbt6\") pod \"dnsmasq-dns-5d7b5456f5-r2cd8\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.137992 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.159755 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-559b986c67-8z275" Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.180497 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f44444cf-c4kl8"] Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.182653 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.197772 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-fgjp7"] Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.198928 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7"
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.236490 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-fgjp7"]
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.297700 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-config\") pod \"05d77594-6cc8-4ba2-b7af-236e7362f239\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") "
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.297873 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-dns-svc\") pod \"05d77594-6cc8-4ba2-b7af-236e7362f239\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") "
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.297923 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fps6v\" (UniqueName: \"kubernetes.io/projected/05d77594-6cc8-4ba2-b7af-236e7362f239-kube-api-access-fps6v\") pod \"05d77594-6cc8-4ba2-b7af-236e7362f239\" (UID: \"05d77594-6cc8-4ba2-b7af-236e7362f239\") "
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.298145 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvjp\" (UniqueName: \"kubernetes.io/projected/a1f047da-299f-4464-a3d3-ba4311247106-kube-api-access-ssvjp\") pod \"dnsmasq-dns-98ddfc8f-fgjp7\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7"
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.298202 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-config\") pod \"dnsmasq-dns-98ddfc8f-fgjp7\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7"
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.298252 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-fgjp7\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7"
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.298760 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-config" (OuterVolumeSpecName: "config") pod "05d77594-6cc8-4ba2-b7af-236e7362f239" (UID: "05d77594-6cc8-4ba2-b7af-236e7362f239"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.299059 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05d77594-6cc8-4ba2-b7af-236e7362f239" (UID: "05d77594-6cc8-4ba2-b7af-236e7362f239"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.302678 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d77594-6cc8-4ba2-b7af-236e7362f239-kube-api-access-fps6v" (OuterVolumeSpecName: "kube-api-access-fps6v") pod "05d77594-6cc8-4ba2-b7af-236e7362f239" (UID: "05d77594-6cc8-4ba2-b7af-236e7362f239"). InnerVolumeSpecName "kube-api-access-fps6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.399546 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-config\") pod \"dnsmasq-dns-98ddfc8f-fgjp7\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7"
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.399659 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-fgjp7\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7"
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.399700 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssvjp\" (UniqueName: \"kubernetes.io/projected/a1f047da-299f-4464-a3d3-ba4311247106-kube-api-access-ssvjp\") pod \"dnsmasq-dns-98ddfc8f-fgjp7\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7"
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.399759 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-config\") on node \"crc\" DevicePath \"\""
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.399769 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05d77594-6cc8-4ba2-b7af-236e7362f239-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.399780 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fps6v\" (UniqueName: \"kubernetes.io/projected/05d77594-6cc8-4ba2-b7af-236e7362f239-kube-api-access-fps6v\") on node \"crc\" DevicePath \"\""
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.401002 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-config\") pod \"dnsmasq-dns-98ddfc8f-fgjp7\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7"
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.401516 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-fgjp7\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7"
Nov 26 08:15:58 crc kubenswrapper[4909]: I1126 08:15:58.419147 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssvjp\" (UniqueName: \"kubernetes.io/projected/a1f047da-299f-4464-a3d3-ba4311247106-kube-api-access-ssvjp\") pod \"dnsmasq-dns-98ddfc8f-fgjp7\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:58.538066 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f44444cf-c4kl8"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:58.543787 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:58.776762 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-r2cd8"]
Nov 26 08:15:59 crc kubenswrapper[4909]: W1126 08:15:58.782749 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45938321_3fef_49a2_93a5_7e9e1efa5280.slice/crio-83e10cea700babcebce29dcd0b9e8bd8d006c5350c65772f2dde2d0183bf1d0b WatchSource:0}: Error finding container 83e10cea700babcebce29dcd0b9e8bd8d006c5350c65772f2dde2d0183bf1d0b: Status 404 returned error can't find the container with id 83e10cea700babcebce29dcd0b9e8bd8d006c5350c65772f2dde2d0183bf1d0b
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.058706 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.060224 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.069022 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ww8gf"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.069278 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.069667 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.073442 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.075997 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.076412 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.120179 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.120259 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.120313 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.120363 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.120387 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fa3d721-75c2-4528-bc0e-09d91be2312c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.120415 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fa3d721-75c2-4528-bc0e-09d91be2312c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.120463 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.120493 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.120548 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mxvg\" (UniqueName: \"kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-kube-api-access-2mxvg\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.149571 4909 generic.go:334] "Generic (PLEG): container finished" podID="b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c" containerID="b38511ba5e6f1dba5f949f9bb2ed8d990bf630dc2944865c1122e3ff24418492" exitCode=0
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.149752 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f44444cf-c4kl8" event={"ID":"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c","Type":"ContainerDied","Data":"b38511ba5e6f1dba5f949f9bb2ed8d990bf630dc2944865c1122e3ff24418492"}
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.149790 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f44444cf-c4kl8" event={"ID":"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c","Type":"ContainerStarted","Data":"ee62d610acf131d89a3147d55a7764d1b8f82a4779058e907ae768705ff5876c"}
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.157696 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-559b986c67-8z275"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.158830 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" event={"ID":"45938321-3fef-49a2-93a5-7e9e1efa5280","Type":"ContainerStarted","Data":"83e10cea700babcebce29dcd0b9e8bd8d006c5350c65772f2dde2d0183bf1d0b"}
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.221345 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.221378 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fa3d721-75c2-4528-bc0e-09d91be2312c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.221400 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fa3d721-75c2-4528-bc0e-09d91be2312c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.221429 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.221455 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.221493 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mxvg\" (UniqueName: \"kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-kube-api-access-2mxvg\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.221516 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.221549 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.221581 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.221940 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.228194 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fa3d721-75c2-4528-bc0e-09d91be2312c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.233472 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.234390 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.234876 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-559b986c67-8z275"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.236550 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.238443 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.238494 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/08ac053324b34d6d39032c48aaba4ae6f840b918837e8021b9bda528bee464ee/globalmount\"" pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.243167 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-559b986c67-8z275"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.245576 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fa3d721-75c2-4528-bc0e-09d91be2312c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.248476 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.271406 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mxvg\" (UniqueName: \"kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-kube-api-access-2mxvg\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.308891 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\") pod \"rabbitmq-server-0\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.370631 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.375528 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.379135 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.379416 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.379529 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.379748 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-h84sv"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.380257 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.412813 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.420809 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.424303 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41841468-a0d1-4f73-b69c-06616cb18707\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.424344 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.424369 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.424389 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.424421 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.424437 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.424456 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.424495 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9922m\" (UniqueName: \"kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-kube-api-access-9922m\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.424514 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.525959 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41841468-a0d1-4f73-b69c-06616cb18707\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.526030 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.526060 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.526084 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.526130 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.526151 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.526180 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.526237 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9922m\" (UniqueName: \"kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-kube-api-access-9922m\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.526278 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.526896 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.528371 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.529401 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.530131 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.537283 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.543780 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.543819 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41841468-a0d1-4f73-b69c-06616cb18707\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c50c8062e852a8b18c36a8d39d22d5c69a7617341b6b523869d660f5b1d66c88/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.547773 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.550378 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.558822 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9922m\" (UniqueName: \"kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-kube-api-access-9922m\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.586651 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-fgjp7"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.592497 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f44444cf-c4kl8"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.630103 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rns6r\" (UniqueName: \"kubernetes.io/projected/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-kube-api-access-rns6r\") pod \"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c\" (UID: \"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c\") "
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.630224 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-config\") pod \"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c\" (UID: \"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c\") "
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.630495 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41841468-a0d1-4f73-b69c-06616cb18707\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707\") pod \"rabbitmq-cell1-server-0\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.637082 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-kube-api-access-rns6r" (OuterVolumeSpecName: "kube-api-access-rns6r") pod "b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c" (UID: "b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c"). InnerVolumeSpecName "kube-api-access-rns6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.679771 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Nov 26 08:15:59 crc kubenswrapper[4909]: E1126 08:15:59.680101 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c" containerName="init"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.680121 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c" containerName="init"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.681349 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-config" (OuterVolumeSpecName: "config") pod "b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c" (UID: "b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.683083 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c" containerName="init"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.684199 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.693961 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.694153 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-nw6kt"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.713208 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.713689 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.734393 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f27faa88-3551-4b4c-a737-409c1ef02b7f-kolla-config\") pod \"memcached-0\" (UID: \"f27faa88-3551-4b4c-a737-409c1ef02b7f\") " pod="openstack/memcached-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.734496 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f27faa88-3551-4b4c-a737-409c1ef02b7f-config-data\") pod \"memcached-0\" (UID: \"f27faa88-3551-4b4c-a737-409c1ef02b7f\") " pod="openstack/memcached-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.734529 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whq2p\" (UniqueName: \"kubernetes.io/projected/f27faa88-3551-4b4c-a737-409c1ef02b7f-kube-api-access-whq2p\") pod \"memcached-0\" (UID: \"f27faa88-3551-4b4c-a737-409c1ef02b7f\") " pod="openstack/memcached-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.734581 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rns6r\" (UniqueName: \"kubernetes.io/projected/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-kube-api-access-rns6r\") on node \"crc\" DevicePath \"\""
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.734606 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c-config\") on node \"crc\" DevicePath \"\""
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.858720 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f27faa88-3551-4b4c-a737-409c1ef02b7f-config-data\") pod \"memcached-0\" (UID: \"f27faa88-3551-4b4c-a737-409c1ef02b7f\") " pod="openstack/memcached-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.858776 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whq2p\" (UniqueName: \"kubernetes.io/projected/f27faa88-3551-4b4c-a737-409c1ef02b7f-kube-api-access-whq2p\") pod \"memcached-0\" (UID: \"f27faa88-3551-4b4c-a737-409c1ef02b7f\") " pod="openstack/memcached-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.858813 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f27faa88-3551-4b4c-a737-409c1ef02b7f-kolla-config\") pod \"memcached-0\" (UID: \"f27faa88-3551-4b4c-a737-409c1ef02b7f\") " pod="openstack/memcached-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.862798 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f27faa88-3551-4b4c-a737-409c1ef02b7f-config-data\") pod \"memcached-0\" (UID: \"f27faa88-3551-4b4c-a737-409c1ef02b7f\") " pod="openstack/memcached-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.864045 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f27faa88-3551-4b4c-a737-409c1ef02b7f-kolla-config\") pod \"memcached-0\" (UID: \"f27faa88-3551-4b4c-a737-409c1ef02b7f\") " pod="openstack/memcached-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.864406 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.865576 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.874503 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.874793 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-xmptv"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.874520 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.877198 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.880294 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.882689 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whq2p\" (UniqueName: \"kubernetes.io/projected/f27faa88-3551-4b4c-a737-409c1ef02b7f-kube-api-access-whq2p\") pod \"memcached-0\" (UID: \"f27faa88-3551-4b4c-a737-409c1ef02b7f\") " pod="openstack/memcached-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.883019 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.884455 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.953743 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.959935 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmgqn\" (UniqueName: \"kubernetes.io/projected/3e194b71-c30a-4d1e-bc5e-acfb949134f9-kube-api-access-fmgqn\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.960011 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e194b71-c30a-4d1e-bc5e-acfb949134f9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.960045 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e194b71-c30a-4d1e-bc5e-acfb949134f9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.960091 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e194b71-c30a-4d1e-bc5e-acfb949134f9-kolla-config\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.960213 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e194b71-c30a-4d1e-bc5e-acfb949134f9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.960348 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e194b71-c30a-4d1e-bc5e-acfb949134f9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.960433 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d3d6ad4b-f5b6-4286-8e06-4e24e10b8ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3d6ad4b-f5b6-4286-8e06-4e24e10b8ca6\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.960468 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e194b71-c30a-4d1e-bc5e-acfb949134f9-config-data-default\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:15:59 crc kubenswrapper[4909]: I1126 08:15:59.960506 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3e194b71-c30a-4d1e-bc5e-acfb949134f9-secrets\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.012625 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.061037 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3e194b71-c30a-4d1e-bc5e-acfb949134f9-secrets\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.061105 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmgqn\" (UniqueName: \"kubernetes.io/projected/3e194b71-c30a-4d1e-bc5e-acfb949134f9-kube-api-access-fmgqn\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.061154 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e194b71-c30a-4d1e-bc5e-acfb949134f9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.061178 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e194b71-c30a-4d1e-bc5e-acfb949134f9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.061208 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e194b71-c30a-4d1e-bc5e-acfb949134f9-kolla-config\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.061256 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e194b71-c30a-4d1e-bc5e-acfb949134f9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.061293 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e194b71-c30a-4d1e-bc5e-acfb949134f9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.061337 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d3d6ad4b-f5b6-4286-8e06-4e24e10b8ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3d6ad4b-f5b6-4286-8e06-4e24e10b8ca6\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.061364 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e194b71-c30a-4d1e-bc5e-acfb949134f9-config-data-default\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.062753 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e194b71-c30a-4d1e-bc5e-acfb949134f9-config-data-default\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.063006 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e194b71-c30a-4d1e-bc5e-acfb949134f9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.063165 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e194b71-c30a-4d1e-bc5e-acfb949134f9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.063435 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e194b71-c30a-4d1e-bc5e-acfb949134f9-kolla-config\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.063938 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3e194b71-c30a-4d1e-bc5e-acfb949134f9-secrets\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.064753 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e194b71-c30a-4d1e-bc5e-acfb949134f9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.064813 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.064850 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d3d6ad4b-f5b6-4286-8e06-4e24e10b8ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3d6ad4b-f5b6-4286-8e06-4e24e10b8ca6\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ee77d65aef6de80a5a50e331e1e77e3cccf1362ba036da5bc5ff9d67e0f020d/globalmount\"" pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.065198 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e194b71-c30a-4d1e-bc5e-acfb949134f9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.082261 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmgqn\" (UniqueName: \"kubernetes.io/projected/3e194b71-c30a-4d1e-bc5e-acfb949134f9-kube-api-access-fmgqn\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.096632 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d3d6ad4b-f5b6-4286-8e06-4e24e10b8ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3d6ad4b-f5b6-4286-8e06-4e24e10b8ca6\") pod \"openstack-galera-0\" (UID: \"3e194b71-c30a-4d1e-bc5e-acfb949134f9\") " pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.168129 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f44444cf-c4kl8"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.168131 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f44444cf-c4kl8" event={"ID":"b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c","Type":"ContainerDied","Data":"ee62d610acf131d89a3147d55a7764d1b8f82a4779058e907ae768705ff5876c"}
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.168189 4909 scope.go:117] "RemoveContainer" containerID="b38511ba5e6f1dba5f949f9bb2ed8d990bf630dc2944865c1122e3ff24418492"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.170178 4909 generic.go:334] "Generic (PLEG): container finished" podID="a1f047da-299f-4464-a3d3-ba4311247106" containerID="600e046d4b374921aa30a5378c94c186fad91dcd41e2d52bbc872747540160ab" exitCode=0
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.170262 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" event={"ID":"a1f047da-299f-4464-a3d3-ba4311247106","Type":"ContainerDied","Data":"600e046d4b374921aa30a5378c94c186fad91dcd41e2d52bbc872747540160ab"}
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.170342 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" event={"ID":"a1f047da-299f-4464-a3d3-ba4311247106","Type":"ContainerStarted","Data":"99b0cbf9ff2f7cae6c065992eaa15976d5a72f889c03ee8460346d655788ca22"}
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.172430 4909 generic.go:334] "Generic (PLEG): container finished" podID="45938321-3fef-49a2-93a5-7e9e1efa5280" containerID="ab74b31142465bac030db81c84ce4b3c7bd861f22155bcfa597668b321528459" exitCode=0
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.172506 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" event={"ID":"45938321-3fef-49a2-93a5-7e9e1efa5280","Type":"ContainerDied","Data":"ab74b31142465bac030db81c84ce4b3c7bd861f22155bcfa597668b321528459"}
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.176485 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2fa3d721-75c2-4528-bc0e-09d91be2312c","Type":"ContainerStarted","Data":"9ee9e32045b7a7a2ee99271736380137ad5619e9f48beb068d4bb1674c8dbffb"}
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.195383 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.263618 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.277329 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f44444cf-c4kl8"]
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.282883 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58f44444cf-c4kl8"]
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.479172 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Nov 26 08:16:00 crc kubenswrapper[4909]: W1126 08:16:00.491583 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf27faa88_3551_4b4c_a737_409c1ef02b7f.slice/crio-e37e1785e62b533e78982d398b13eb244ef94b31e3a4a2500ce6e8e6c1830126 WatchSource:0}: Error finding container e37e1785e62b533e78982d398b13eb244ef94b31e3a4a2500ce6e8e6c1830126: Status 404 returned error can't find the container with id e37e1785e62b533e78982d398b13eb244ef94b31e3a4a2500ce6e8e6c1830126
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.509578 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05d77594-6cc8-4ba2-b7af-236e7362f239" path="/var/lib/kubelet/pods/05d77594-6cc8-4ba2-b7af-236e7362f239/volumes"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.510487 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c" path="/var/lib/kubelet/pods/b9c7a8dd-8587-4c0d-8188-c8eec3b3a14c/volumes"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.518836 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Nov 26 08:16:00 crc kubenswrapper[4909]: W1126 08:16:00.522291 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e194b71_c30a_4d1e_bc5e_acfb949134f9.slice/crio-9beaa92bc8badbe75397281c52639ec4e389a08bef6697f720ec5967cd9fc428 WatchSource:0}: Error finding container 9beaa92bc8badbe75397281c52639ec4e389a08bef6697f720ec5967cd9fc428: Status 404 returned error can't find the container with id 9beaa92bc8badbe75397281c52639ec4e389a08bef6697f720ec5967cd9fc428
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.632749 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.635104 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.638577 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4x6nq"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.638736 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.639688 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.640319 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.642695 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.770754 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x89kt\" (UniqueName: \"kubernetes.io/projected/0030125a-9381-4664-9a8f-bcc4a9a812e7-kube-api-access-x89kt\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.770832 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0030125a-9381-4664-9a8f-bcc4a9a812e7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.770878 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d900a0ce-0f82-4233-baa7-99b01a4168f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d900a0ce-0f82-4233-baa7-99b01a4168f1\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.770914 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0030125a-9381-4664-9a8f-bcc4a9a812e7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.770968 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0030125a-9381-4664-9a8f-bcc4a9a812e7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.771026 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0030125a-9381-4664-9a8f-bcc4a9a812e7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.771056 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0030125a-9381-4664-9a8f-bcc4a9a812e7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.771108 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0030125a-9381-4664-9a8f-bcc4a9a812e7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.771160 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/0030125a-9381-4664-9a8f-bcc4a9a812e7-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.872709 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/0030125a-9381-4664-9a8f-bcc4a9a812e7-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.872816 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x89kt\" (UniqueName: \"kubernetes.io/projected/0030125a-9381-4664-9a8f-bcc4a9a812e7-kube-api-access-x89kt\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.872850 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0030125a-9381-4664-9a8f-bcc4a9a812e7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.873298 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d900a0ce-0f82-4233-baa7-99b01a4168f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d900a0ce-0f82-4233-baa7-99b01a4168f1\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.873334 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0030125a-9381-4664-9a8f-bcc4a9a812e7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.873836 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0030125a-9381-4664-9a8f-bcc4a9a812e7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.873873 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0030125a-9381-4664-9a8f-bcc4a9a812e7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.873909 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0030125a-9381-4664-9a8f-bcc4a9a812e7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.874333 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0030125a-9381-4664-9a8f-bcc4a9a812e7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.874519 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0030125a-9381-4664-9a8f-bcc4a9a812e7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.874575 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0030125a-9381-4664-9a8f-bcc4a9a812e7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.874773 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0030125a-9381-4664-9a8f-bcc4a9a812e7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.874827 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0030125a-9381-4664-9a8f-bcc4a9a812e7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.876975 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0030125a-9381-4664-9a8f-bcc4a9a812e7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.878214 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/0030125a-9381-4664-9a8f-bcc4a9a812e7-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0"
Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.878719 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0030125a-9381-4664-9a8f-bcc4a9a812e7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID:
\"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0" Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.881555 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.881602 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d900a0ce-0f82-4233-baa7-99b01a4168f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d900a0ce-0f82-4233-baa7-99b01a4168f1\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/be17325ae91496a2a11bd3844223080bb97c94a766e279b47822f13bb49b0eb0/globalmount\"" pod="openstack/openstack-cell1-galera-0" Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.893440 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x89kt\" (UniqueName: \"kubernetes.io/projected/0030125a-9381-4664-9a8f-bcc4a9a812e7-kube-api-access-x89kt\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0" Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.914695 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d900a0ce-0f82-4233-baa7-99b01a4168f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d900a0ce-0f82-4233-baa7-99b01a4168f1\") pod \"openstack-cell1-galera-0\" (UID: \"0030125a-9381-4664-9a8f-bcc4a9a812e7\") " pod="openstack/openstack-cell1-galera-0" Nov 26 08:16:00 crc kubenswrapper[4909]: I1126 08:16:00.963184 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.188743 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" event={"ID":"a1f047da-299f-4464-a3d3-ba4311247106","Type":"ContainerStarted","Data":"7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5"} Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.189041 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.191351 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" event={"ID":"45938321-3fef-49a2-93a5-7e9e1efa5280","Type":"ContainerStarted","Data":"e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2"} Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.191521 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.193615 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f27faa88-3551-4b4c-a737-409c1ef02b7f","Type":"ContainerStarted","Data":"2932afe690e848cc15c1a95600e88a027e5befa0765eac70d96f4d95a1e6a312"} Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.193659 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f27faa88-3551-4b4c-a737-409c1ef02b7f","Type":"ContainerStarted","Data":"e37e1785e62b533e78982d398b13eb244ef94b31e3a4a2500ce6e8e6c1830126"} Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.193742 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.196914 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e194b71-c30a-4d1e-bc5e-acfb949134f9","Type":"ContainerStarted","Data":"a09308745f9093ee287dbe598bf85f6f26f3dc8c38c56320a99c79af7d3577c2"} Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.196942 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e194b71-c30a-4d1e-bc5e-acfb949134f9","Type":"ContainerStarted","Data":"9beaa92bc8badbe75397281c52639ec4e389a08bef6697f720ec5967cd9fc428"} Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.203074 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f86f4a4c-a96a-4325-91ef-0e8a6f63c913","Type":"ContainerStarted","Data":"68cef23c56316b48ebd92a3704e2563b993b8005150d4f1e034c95540b3550f2"} Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.207215 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" podStartSLOduration=3.207201466 podStartE2EDuration="3.207201466s" podCreationTimestamp="2025-11-26 08:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:16:01.206185478 +0000 UTC m=+4533.352396634" watchObservedRunningTime="2025-11-26 08:16:01.207201466 +0000 UTC m=+4533.353412632" Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.225118 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.225100214 podStartE2EDuration="2.225100214s" podCreationTimestamp="2025-11-26 
08:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:16:01.222971796 +0000 UTC m=+4533.369182962" watchObservedRunningTime="2025-11-26 08:16:01.225100214 +0000 UTC m=+4533.371311380" Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.242229 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" podStartSLOduration=4.242206 podStartE2EDuration="4.242206s" podCreationTimestamp="2025-11-26 08:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:16:01.241343507 +0000 UTC m=+4533.387554683" watchObservedRunningTime="2025-11-26 08:16:01.242206 +0000 UTC m=+4533.388417186" Nov 26 08:16:01 crc kubenswrapper[4909]: I1126 08:16:01.485760 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 26 08:16:01 crc kubenswrapper[4909]: W1126 08:16:01.528920 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0030125a_9381_4664_9a8f_bcc4a9a812e7.slice/crio-f6c9499a88cfd95ea1ca2c8893c6a95eb3169ecb49553353196101d23477d4d9 WatchSource:0}: Error finding container f6c9499a88cfd95ea1ca2c8893c6a95eb3169ecb49553353196101d23477d4d9: Status 404 returned error can't find the container with id f6c9499a88cfd95ea1ca2c8893c6a95eb3169ecb49553353196101d23477d4d9 Nov 26 08:16:02 crc kubenswrapper[4909]: I1126 08:16:02.212631 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0030125a-9381-4664-9a8f-bcc4a9a812e7","Type":"ContainerStarted","Data":"66190039aed1123b225490dc0aa56614feddce51e634cb788c2dbfc6070d7574"} Nov 26 08:16:02 crc kubenswrapper[4909]: I1126 08:16:02.212916 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0030125a-9381-4664-9a8f-bcc4a9a812e7","Type":"ContainerStarted","Data":"f6c9499a88cfd95ea1ca2c8893c6a95eb3169ecb49553353196101d23477d4d9"} Nov 26 08:16:02 crc kubenswrapper[4909]: I1126 08:16:02.215735 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f86f4a4c-a96a-4325-91ef-0e8a6f63c913","Type":"ContainerStarted","Data":"b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699"} Nov 26 08:16:02 crc kubenswrapper[4909]: I1126 08:16:02.217761 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2fa3d721-75c2-4528-bc0e-09d91be2312c","Type":"ContainerStarted","Data":"c76d8a1429185de8fb034794fc4662806c86b63087f32c1c18229cde6eeaa302"} Nov 26 08:16:04 crc kubenswrapper[4909]: I1126 08:16:04.234500 4909 generic.go:334] "Generic (PLEG): container finished" podID="3e194b71-c30a-4d1e-bc5e-acfb949134f9" containerID="a09308745f9093ee287dbe598bf85f6f26f3dc8c38c56320a99c79af7d3577c2" exitCode=0 Nov 26 08:16:04 crc kubenswrapper[4909]: I1126 08:16:04.234557 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e194b71-c30a-4d1e-bc5e-acfb949134f9","Type":"ContainerDied","Data":"a09308745f9093ee287dbe598bf85f6f26f3dc8c38c56320a99c79af7d3577c2"} Nov 26 08:16:04 crc kubenswrapper[4909]: I1126 08:16:04.499266 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:16:04 crc 
kubenswrapper[4909]: E1126 08:16:04.499509 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:16:05 crc kubenswrapper[4909]: I1126 08:16:05.015824 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 26 08:16:05 crc kubenswrapper[4909]: I1126 08:16:05.252214 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e194b71-c30a-4d1e-bc5e-acfb949134f9","Type":"ContainerStarted","Data":"39dd8b672052ed8ff17220d4b15796fc55ce49d58c408c19a2ca178abafa5cd7"} Nov 26 08:16:06 crc kubenswrapper[4909]: I1126 08:16:06.264388 4909 generic.go:334] "Generic (PLEG): container finished" podID="0030125a-9381-4664-9a8f-bcc4a9a812e7" containerID="66190039aed1123b225490dc0aa56614feddce51e634cb788c2dbfc6070d7574" exitCode=0 Nov 26 08:16:06 crc kubenswrapper[4909]: I1126 08:16:06.264439 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0030125a-9381-4664-9a8f-bcc4a9a812e7","Type":"ContainerDied","Data":"66190039aed1123b225490dc0aa56614feddce51e634cb788c2dbfc6070d7574"} Nov 26 08:16:06 crc kubenswrapper[4909]: I1126 08:16:06.297332 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.297308402 podStartE2EDuration="8.297308402s" podCreationTimestamp="2025-11-26 08:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:16:05.273951849 +0000 UTC m=+4537.420163005" watchObservedRunningTime="2025-11-26 08:16:06.297308402 +0000 UTC m=+4538.443519568" Nov 26 08:16:07 crc kubenswrapper[4909]: I1126 08:16:07.277313 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0030125a-9381-4664-9a8f-bcc4a9a812e7","Type":"ContainerStarted","Data":"df5941dd3e9c364dfde9466c47f327f2839263c74f5678349b3bc7562e62a68a"} Nov 26 08:16:07 crc kubenswrapper[4909]: I1126 08:16:07.311763 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.311737611 podStartE2EDuration="8.311737611s" podCreationTimestamp="2025-11-26 08:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:16:07.304715779 +0000 UTC m=+4539.450926945" watchObservedRunningTime="2025-11-26 08:16:07.311737611 +0000 UTC m=+4539.457948787" Nov 26 08:16:08 crc kubenswrapper[4909]: I1126 08:16:08.185388 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:16:08 crc kubenswrapper[4909]: I1126 08:16:08.545828 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" Nov 26 08:16:08 crc kubenswrapper[4909]: I1126 08:16:08.606802 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-r2cd8"] Nov 26 08:16:08 crc kubenswrapper[4909]: I1126 08:16:08.607041 4909 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" podUID="45938321-3fef-49a2-93a5-7e9e1efa5280" containerName="dnsmasq-dns" containerID="cri-o://e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2" gracePeriod=10 Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.023543 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.108612 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khbt6\" (UniqueName: \"kubernetes.io/projected/45938321-3fef-49a2-93a5-7e9e1efa5280-kube-api-access-khbt6\") pod \"45938321-3fef-49a2-93a5-7e9e1efa5280\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.108739 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-config\") pod \"45938321-3fef-49a2-93a5-7e9e1efa5280\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.108836 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-dns-svc\") pod \"45938321-3fef-49a2-93a5-7e9e1efa5280\" (UID: \"45938321-3fef-49a2-93a5-7e9e1efa5280\") " Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.117014 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45938321-3fef-49a2-93a5-7e9e1efa5280-kube-api-access-khbt6" (OuterVolumeSpecName: "kube-api-access-khbt6") pod "45938321-3fef-49a2-93a5-7e9e1efa5280" (UID: "45938321-3fef-49a2-93a5-7e9e1efa5280"). InnerVolumeSpecName "kube-api-access-khbt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.144168 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45938321-3fef-49a2-93a5-7e9e1efa5280" (UID: "45938321-3fef-49a2-93a5-7e9e1efa5280"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.145947 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-config" (OuterVolumeSpecName: "config") pod "45938321-3fef-49a2-93a5-7e9e1efa5280" (UID: "45938321-3fef-49a2-93a5-7e9e1efa5280"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.210672 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.210704 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khbt6\" (UniqueName: \"kubernetes.io/projected/45938321-3fef-49a2-93a5-7e9e1efa5280-kube-api-access-khbt6\") on node \"crc\" DevicePath \"\"" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.210718 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45938321-3fef-49a2-93a5-7e9e1efa5280-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.294333 4909 generic.go:334] "Generic (PLEG): container finished" podID="45938321-3fef-49a2-93a5-7e9e1efa5280" containerID="e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2" exitCode=0 Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.294401 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.294400 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" event={"ID":"45938321-3fef-49a2-93a5-7e9e1efa5280","Type":"ContainerDied","Data":"e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2"} Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.295110 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-r2cd8" event={"ID":"45938321-3fef-49a2-93a5-7e9e1efa5280","Type":"ContainerDied","Data":"83e10cea700babcebce29dcd0b9e8bd8d006c5350c65772f2dde2d0183bf1d0b"} Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.295147 4909 scope.go:117] "RemoveContainer" containerID="e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.315491 4909 scope.go:117] "RemoveContainer" containerID="ab74b31142465bac030db81c84ce4b3c7bd861f22155bcfa597668b321528459" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.329870 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-r2cd8"] Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.335980 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-r2cd8"] Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.348652 4909 scope.go:117] "RemoveContainer" containerID="e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2" Nov 26 08:16:09 crc kubenswrapper[4909]: E1126 08:16:09.349085 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2\": container with ID starting with e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2 not found: ID does not exist" containerID="e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.349201 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2"} err="failed to get container status 
\"e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2\": rpc error: code = NotFound desc = could not find container \"e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2\": container with ID starting with e2e162cf3c3cfe8f69544286dff8f58679b9b1afad0592d07ddc879642d47af2 not found: ID does not exist" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.349279 4909 scope.go:117] "RemoveContainer" containerID="ab74b31142465bac030db81c84ce4b3c7bd861f22155bcfa597668b321528459" Nov 26 08:16:09 crc kubenswrapper[4909]: E1126 08:16:09.349831 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab74b31142465bac030db81c84ce4b3c7bd861f22155bcfa597668b321528459\": container with ID starting with ab74b31142465bac030db81c84ce4b3c7bd861f22155bcfa597668b321528459 not found: ID does not exist" containerID="ab74b31142465bac030db81c84ce4b3c7bd861f22155bcfa597668b321528459" Nov 26 08:16:09 crc kubenswrapper[4909]: I1126 08:16:09.349862 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab74b31142465bac030db81c84ce4b3c7bd861f22155bcfa597668b321528459"} err="failed to get container status \"ab74b31142465bac030db81c84ce4b3c7bd861f22155bcfa597668b321528459\": rpc error: code = NotFound desc = could not find container \"ab74b31142465bac030db81c84ce4b3c7bd861f22155bcfa597668b321528459\": container with ID starting with ab74b31142465bac030db81c84ce4b3c7bd861f22155bcfa597668b321528459 not found: ID does not exist" Nov 26 08:16:10 crc kubenswrapper[4909]: I1126 08:16:10.195798 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 26 08:16:10 crc kubenswrapper[4909]: I1126 08:16:10.195852 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 26 08:16:10 crc kubenswrapper[4909]: I1126 08:16:10.510464 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45938321-3fef-49a2-93a5-7e9e1efa5280" path="/var/lib/kubelet/pods/45938321-3fef-49a2-93a5-7e9e1efa5280/volumes" Nov 26 08:16:10 crc kubenswrapper[4909]: I1126 08:16:10.964399 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 26 08:16:10 crc kubenswrapper[4909]: I1126 08:16:10.964790 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 26 08:16:12 crc kubenswrapper[4909]: I1126 08:16:12.269565 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 26 08:16:12 crc kubenswrapper[4909]: I1126 08:16:12.319892 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 26 08:16:13 crc kubenswrapper[4909]: I1126 08:16:13.049683 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 26 08:16:13 crc kubenswrapper[4909]: I1126 08:16:13.098311 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 26 08:16:17 crc kubenswrapper[4909]: I1126 08:16:17.499049 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:16:17 crc kubenswrapper[4909]: E1126 08:16:17.500137 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:16:32 crc kubenswrapper[4909]: I1126 08:16:32.499811 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:16:32 crc kubenswrapper[4909]: E1126 08:16:32.500647 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:16:33 crc kubenswrapper[4909]: I1126 08:16:33.494715 4909 generic.go:334] "Generic (PLEG): container finished" podID="2fa3d721-75c2-4528-bc0e-09d91be2312c" containerID="c76d8a1429185de8fb034794fc4662806c86b63087f32c1c18229cde6eeaa302" exitCode=0 Nov 26 08:16:33 crc kubenswrapper[4909]: I1126 08:16:33.494788 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2fa3d721-75c2-4528-bc0e-09d91be2312c","Type":"ContainerDied","Data":"c76d8a1429185de8fb034794fc4662806c86b63087f32c1c18229cde6eeaa302"} Nov 26 08:16:34 crc kubenswrapper[4909]: I1126 08:16:34.506229 4909 generic.go:334] "Generic (PLEG): container finished" podID="f86f4a4c-a96a-4325-91ef-0e8a6f63c913" containerID="b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699" exitCode=0 Nov 26 08:16:34 crc kubenswrapper[4909]: I1126 08:16:34.507127 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2fa3d721-75c2-4528-bc0e-09d91be2312c","Type":"ContainerStarted","Data":"ca9e9ab2f40d964a8b364994c8216f0161588339188f76bb85a287ec6fb1bc84"} Nov 26 08:16:34 crc kubenswrapper[4909]: I1126 08:16:34.507734 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f86f4a4c-a96a-4325-91ef-0e8a6f63c913","Type":"ContainerDied","Data":"b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699"} Nov 26 08:16:34 crc kubenswrapper[4909]: I1126 08:16:34.507997 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 26 08:16:34 crc kubenswrapper[4909]: I1126 08:16:34.563809 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.563787288 podStartE2EDuration="36.563787288s" podCreationTimestamp="2025-11-26 08:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:16:34.556987872 +0000 UTC m=+4566.703199038" watchObservedRunningTime="2025-11-26 08:16:34.563787288 +0000 UTC m=+4566.709998454" Nov 26 08:16:35 crc kubenswrapper[4909]: I1126 08:16:35.515781 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f86f4a4c-a96a-4325-91ef-0e8a6f63c913","Type":"ContainerStarted","Data":"efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7"} Nov 26 08:16:35 crc kubenswrapper[4909]: I1126 08:16:35.516277 4909 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:16:35 crc kubenswrapper[4909]: I1126 08:16:35.539740 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.539721618 podStartE2EDuration="37.539721618s" podCreationTimestamp="2025-11-26 08:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:16:35.534727401 +0000 UTC m=+4567.680938577" watchObservedRunningTime="2025-11-26 08:16:35.539721618 +0000 UTC m=+4567.685932784" Nov 26 08:16:47 crc kubenswrapper[4909]: I1126 08:16:47.499136 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:16:48 crc kubenswrapper[4909]: I1126 08:16:48.635474 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"1d02eb9ff541f068efba31560f6d49bd8fa1db4909c006e87655b88c2586e0c0"} Nov 26 08:16:49 crc kubenswrapper[4909]: I1126 08:16:49.424810 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 26 08:16:49 crc kubenswrapper[4909]: I1126 08:16:49.716723 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.635226 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-fpkg5"] Nov 26 08:16:52 crc kubenswrapper[4909]: E1126 08:16:52.636168 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45938321-3fef-49a2-93a5-7e9e1efa5280" containerName="dnsmasq-dns" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.636183 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="45938321-3fef-49a2-93a5-7e9e1efa5280" containerName="dnsmasq-dns" Nov 26 08:16:52 crc kubenswrapper[4909]: E1126 08:16:52.636232 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45938321-3fef-49a2-93a5-7e9e1efa5280" containerName="init" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.636239 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="45938321-3fef-49a2-93a5-7e9e1efa5280" containerName="init" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.636447 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="45938321-3fef-49a2-93a5-7e9e1efa5280" containerName="dnsmasq-dns" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.637481 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.643793 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-fpkg5"] Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.692235 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-fpkg5\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.692274 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lctcb\" (UniqueName: \"kubernetes.io/projected/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-kube-api-access-lctcb\") pod \"dnsmasq-dns-5b7946d7b9-fpkg5\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.692306 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-config\") pod \"dnsmasq-dns-5b7946d7b9-fpkg5\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.794352 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-fpkg5\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.794433 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lctcb\" (UniqueName: \"kubernetes.io/projected/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-kube-api-access-lctcb\") pod \"dnsmasq-dns-5b7946d7b9-fpkg5\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.794494 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-config\") pod \"dnsmasq-dns-5b7946d7b9-fpkg5\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.795542 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-config\") pod \"dnsmasq-dns-5b7946d7b9-fpkg5\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.795549 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-fpkg5\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.822390 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lctcb\" (UniqueName: 
\"kubernetes.io/projected/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-kube-api-access-lctcb\") pod \"dnsmasq-dns-5b7946d7b9-fpkg5\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:52 crc kubenswrapper[4909]: I1126 08:16:52.993374 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:53 crc kubenswrapper[4909]: I1126 08:16:53.364933 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 08:16:53 crc kubenswrapper[4909]: I1126 08:16:53.426052 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-fpkg5"] Nov 26 08:16:53 crc kubenswrapper[4909]: W1126 08:16:53.441674 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f8b29f5_e339_4947_a7b5_a68d6dfaced6.slice/crio-f6fb6eb47d2cc943664a86346151f343055d8b5e5f1e928601bf8646d30a924a WatchSource:0}: Error finding container f6fb6eb47d2cc943664a86346151f343055d8b5e5f1e928601bf8646d30a924a: Status 404 returned error can't find the container with id f6fb6eb47d2cc943664a86346151f343055d8b5e5f1e928601bf8646d30a924a Nov 26 08:16:53 crc kubenswrapper[4909]: I1126 08:16:53.693022 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" event={"ID":"0f8b29f5-e339-4947-a7b5-a68d6dfaced6","Type":"ContainerStarted","Data":"f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73"} Nov 26 08:16:53 crc kubenswrapper[4909]: I1126 08:16:53.693357 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" event={"ID":"0f8b29f5-e339-4947-a7b5-a68d6dfaced6","Type":"ContainerStarted","Data":"f6fb6eb47d2cc943664a86346151f343055d8b5e5f1e928601bf8646d30a924a"} Nov 26 08:16:53 crc kubenswrapper[4909]: I1126 08:16:53.919699 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 08:16:54 crc kubenswrapper[4909]: I1126 08:16:54.702537 4909 generic.go:334] "Generic (PLEG): container finished" podID="0f8b29f5-e339-4947-a7b5-a68d6dfaced6" containerID="f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73" exitCode=0 Nov 26 08:16:54 crc kubenswrapper[4909]: I1126 08:16:54.702613 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" event={"ID":"0f8b29f5-e339-4947-a7b5-a68d6dfaced6","Type":"ContainerDied","Data":"f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73"} Nov 26 08:16:55 crc kubenswrapper[4909]: I1126 08:16:55.339313 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2fa3d721-75c2-4528-bc0e-09d91be2312c" containerName="rabbitmq" containerID="cri-o://ca9e9ab2f40d964a8b364994c8216f0161588339188f76bb85a287ec6fb1bc84" gracePeriod=604799 Nov 26 08:16:55 crc kubenswrapper[4909]: I1126 08:16:55.712895 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" event={"ID":"0f8b29f5-e339-4947-a7b5-a68d6dfaced6","Type":"ContainerStarted","Data":"43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2"} Nov 26 08:16:55 crc kubenswrapper[4909]: I1126 08:16:55.713091 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:16:55 crc kubenswrapper[4909]: I1126 08:16:55.739332 4909 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" podStartSLOduration=3.739306665 podStartE2EDuration="3.739306665s" podCreationTimestamp="2025-11-26 08:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:16:55.730260988 +0000 UTC m=+4587.876472154" watchObservedRunningTime="2025-11-26 08:16:55.739306665 +0000 UTC m=+4587.885517841" Nov 26 08:16:55 crc kubenswrapper[4909]: I1126 08:16:55.759632 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f86f4a4c-a96a-4325-91ef-0e8a6f63c913" containerName="rabbitmq" containerID="cri-o://efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7" gracePeriod=604799 Nov 26 08:16:59 crc kubenswrapper[4909]: I1126 08:16:59.421531 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2fa3d721-75c2-4528-bc0e-09d91be2312c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.253:5672: connect: connection refused" Nov 26 08:16:59 crc kubenswrapper[4909]: I1126 08:16:59.715084 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f86f4a4c-a96a-4325-91ef-0e8a6f63c913" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.254:5672: connect: connection refused" Nov 26 08:17:01 crc kubenswrapper[4909]: I1126 08:17:01.766463 4909 generic.go:334] "Generic (PLEG): container finished" podID="2fa3d721-75c2-4528-bc0e-09d91be2312c" containerID="ca9e9ab2f40d964a8b364994c8216f0161588339188f76bb85a287ec6fb1bc84" exitCode=0 Nov 26 08:17:01 crc kubenswrapper[4909]: I1126 08:17:01.766562 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2fa3d721-75c2-4528-bc0e-09d91be2312c","Type":"ContainerDied","Data":"ca9e9ab2f40d964a8b364994c8216f0161588339188f76bb85a287ec6fb1bc84"} Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.605299 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.610195 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.670682 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-erlang-cookie-secret\") pod \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.670753 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-pod-info\") pod \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.670802 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mxvg\" (UniqueName: \"kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-kube-api-access-2mxvg\") pod \"2fa3d721-75c2-4528-bc0e-09d91be2312c\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.670828 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9922m\" (UniqueName: \"kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-kube-api-access-9922m\") pod \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.670853 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fa3d721-75c2-4528-bc0e-09d91be2312c-pod-info\") pod \"2fa3d721-75c2-4528-bc0e-09d91be2312c\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.670888 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-server-conf\") pod \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671068 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\") pod \"2fa3d721-75c2-4528-bc0e-09d91be2312c\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671105 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fa3d721-75c2-4528-bc0e-09d91be2312c-erlang-cookie-secret\") pod \"2fa3d721-75c2-4528-bc0e-09d91be2312c\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671129 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-plugins-conf\") pod \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671156 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-erlang-cookie\") pod 
\"2fa3d721-75c2-4528-bc0e-09d91be2312c\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671399 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-erlang-cookie\") pod \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671428 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-plugins-conf\") pod \"2fa3d721-75c2-4528-bc0e-09d91be2312c\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671453 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-plugins\") pod \"2fa3d721-75c2-4528-bc0e-09d91be2312c\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671537 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707\") pod \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671623 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-confd\") pod \"2fa3d721-75c2-4528-bc0e-09d91be2312c\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671657 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-server-conf\") pod \"2fa3d721-75c2-4528-bc0e-09d91be2312c\" (UID: \"2fa3d721-75c2-4528-bc0e-09d91be2312c\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671712 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-plugins\") pod \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.671750 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-confd\") pod \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\" (UID: \"f86f4a4c-a96a-4325-91ef-0e8a6f63c913\") " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.674731 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f86f4a4c-a96a-4325-91ef-0e8a6f63c913" (UID: "f86f4a4c-a96a-4325-91ef-0e8a6f63c913"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.675311 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2fa3d721-75c2-4528-bc0e-09d91be2312c" (UID: "2fa3d721-75c2-4528-bc0e-09d91be2312c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.676230 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2fa3d721-75c2-4528-bc0e-09d91be2312c" (UID: "2fa3d721-75c2-4528-bc0e-09d91be2312c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.677097 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f86f4a4c-a96a-4325-91ef-0e8a6f63c913" (UID: "f86f4a4c-a96a-4325-91ef-0e8a6f63c913"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.681192 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2fa3d721-75c2-4528-bc0e-09d91be2312c" (UID: "2fa3d721-75c2-4528-bc0e-09d91be2312c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.683185 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2fa3d721-75c2-4528-bc0e-09d91be2312c-pod-info" (OuterVolumeSpecName: "pod-info") pod "2fa3d721-75c2-4528-bc0e-09d91be2312c" (UID: "2fa3d721-75c2-4528-bc0e-09d91be2312c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.691454 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f86f4a4c-a96a-4325-91ef-0e8a6f63c913" (UID: "f86f4a4c-a96a-4325-91ef-0e8a6f63c913"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.694695 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-kube-api-access-9922m" (OuterVolumeSpecName: "kube-api-access-9922m") pod "f86f4a4c-a96a-4325-91ef-0e8a6f63c913" (UID: "f86f4a4c-a96a-4325-91ef-0e8a6f63c913"). InnerVolumeSpecName "kube-api-access-9922m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.722233 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa3d721-75c2-4528-bc0e-09d91be2312c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2fa3d721-75c2-4528-bc0e-09d91be2312c" (UID: "2fa3d721-75c2-4528-bc0e-09d91be2312c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.722731 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-pod-info" (OuterVolumeSpecName: "pod-info") pod "f86f4a4c-a96a-4325-91ef-0e8a6f63c913" (UID: "f86f4a4c-a96a-4325-91ef-0e8a6f63c913"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.722935 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-kube-api-access-2mxvg" (OuterVolumeSpecName: "kube-api-access-2mxvg") pod "2fa3d721-75c2-4528-bc0e-09d91be2312c" (UID: "2fa3d721-75c2-4528-bc0e-09d91be2312c"). InnerVolumeSpecName "kube-api-access-2mxvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.722986 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f86f4a4c-a96a-4325-91ef-0e8a6f63c913" (UID: "f86f4a4c-a96a-4325-91ef-0e8a6f63c913"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.723407 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c" (OuterVolumeSpecName: "persistence") pod "2fa3d721-75c2-4528-bc0e-09d91be2312c" (UID: "2fa3d721-75c2-4528-bc0e-09d91be2312c"). InnerVolumeSpecName "pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.723099 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707" (OuterVolumeSpecName: "persistence") pod "f86f4a4c-a96a-4325-91ef-0e8a6f63c913" (UID: "f86f4a4c-a96a-4325-91ef-0e8a6f63c913"). InnerVolumeSpecName "pvc-41841468-a0d1-4f73-b69c-06616cb18707". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.730855 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-server-conf" (OuterVolumeSpecName: "server-conf") pod "2fa3d721-75c2-4528-bc0e-09d91be2312c" (UID: "2fa3d721-75c2-4528-bc0e-09d91be2312c"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.742284 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-server-conf" (OuterVolumeSpecName: "server-conf") pod "f86f4a4c-a96a-4325-91ef-0e8a6f63c913" (UID: "f86f4a4c-a96a-4325-91ef-0e8a6f63c913"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773523 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773569 4909 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773584 4909 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-pod-info\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773615 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mxvg\" (UniqueName: \"kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-kube-api-access-2mxvg\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773630 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9922m\" (UniqueName: \"kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-kube-api-access-9922m\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773642 4909 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fa3d721-75c2-4528-bc0e-09d91be2312c-pod-info\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773653 4909 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-server-conf\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773686 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\") on node \"crc\" " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773701 4909 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fa3d721-75c2-4528-bc0e-09d91be2312c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773715 4909 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773726 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc 
kubenswrapper[4909]: I1126 08:17:02.773739 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773750 4909 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773762 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773783 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-41841468-a0d1-4f73-b69c-06616cb18707\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707\") on node \"crc\" " Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.773795 4909 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fa3d721-75c2-4528-bc0e-09d91be2312c-server-conf\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.787133 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2fa3d721-75c2-4528-bc0e-09d91be2312c","Type":"ContainerDied","Data":"9ee9e32045b7a7a2ee99271736380137ad5619e9f48beb068d4bb1674c8dbffb"} Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.787191 4909 scope.go:117] "RemoveContainer" containerID="ca9e9ab2f40d964a8b364994c8216f0161588339188f76bb85a287ec6fb1bc84" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.787337 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.802953 4909 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.803538 4909 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-41841468-a0d1-4f73-b69c-06616cb18707" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707") on node "crc" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.807247 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2fa3d721-75c2-4528-bc0e-09d91be2312c" (UID: "2fa3d721-75c2-4528-bc0e-09d91be2312c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.808287 4909 generic.go:334] "Generic (PLEG): container finished" podID="f86f4a4c-a96a-4325-91ef-0e8a6f63c913" containerID="efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7" exitCode=0 Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.808340 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f86f4a4c-a96a-4325-91ef-0e8a6f63c913","Type":"ContainerDied","Data":"efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7"} Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.808371 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f86f4a4c-a96a-4325-91ef-0e8a6f63c913","Type":"ContainerDied","Data":"68cef23c56316b48ebd92a3704e2563b993b8005150d4f1e034c95540b3550f2"} Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.808446 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.825852 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f86f4a4c-a96a-4325-91ef-0e8a6f63c913" (UID: "f86f4a4c-a96a-4325-91ef-0e8a6f63c913"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.825941 4909 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.826104 4909 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c") on node "crc" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.827751 4909 scope.go:117] "RemoveContainer" containerID="c76d8a1429185de8fb034794fc4662806c86b63087f32c1c18229cde6eeaa302" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.879563 4909 reconciler_common.go:293] "Volume detached for volume \"pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.879629 4909 reconciler_common.go:293] "Volume detached for volume \"pvc-41841468-a0d1-4f73-b69c-06616cb18707\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.879648 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fa3d721-75c2-4528-bc0e-09d91be2312c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.879659 4909 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f86f4a4c-a96a-4325-91ef-0e8a6f63c913-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:02 crc kubenswrapper[4909]: I1126 08:17:02.994780 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.044276 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-fgjp7"] Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.044566 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" podUID="a1f047da-299f-4464-a3d3-ba4311247106" containerName="dnsmasq-dns" containerID="cri-o://7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5" gracePeriod=10 Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.122175 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.128259 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.151046 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 08:17:03 crc kubenswrapper[4909]: E1126 08:17:03.151427 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f86f4a4c-a96a-4325-91ef-0e8a6f63c913" containerName="setup-container" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.151449 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f86f4a4c-a96a-4325-91ef-0e8a6f63c913" containerName="setup-container" Nov 26 08:17:03 crc kubenswrapper[4909]: E1126 08:17:03.151472 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f86f4a4c-a96a-4325-91ef-0e8a6f63c913" containerName="rabbitmq" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.151481 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f86f4a4c-a96a-4325-91ef-0e8a6f63c913" containerName="rabbitmq" Nov 26 08:17:03 crc kubenswrapper[4909]: E1126 08:17:03.151506 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa3d721-75c2-4528-bc0e-09d91be2312c" containerName="rabbitmq" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.151514 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa3d721-75c2-4528-bc0e-09d91be2312c" containerName="rabbitmq" Nov 26 08:17:03 crc kubenswrapper[4909]: E1126 08:17:03.151534 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa3d721-75c2-4528-bc0e-09d91be2312c" containerName="setup-container" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.151542 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa3d721-75c2-4528-bc0e-09d91be2312c" containerName="setup-container" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.151793 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f86f4a4c-a96a-4325-91ef-0e8a6f63c913" containerName="rabbitmq" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.151830 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa3d721-75c2-4528-bc0e-09d91be2312c" containerName="rabbitmq" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.152905 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.162005 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.162898 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.162999 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ww8gf" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.163132 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.163287 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.164025 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.175453 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.181116 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.185291 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b783ab6a-d590-4bf8-b577-aa676da17499-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.185364 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24nng\" (UniqueName: \"kubernetes.io/projected/b783ab6a-d590-4bf8-b577-aa676da17499-kube-api-access-24nng\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.185406 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b783ab6a-d590-4bf8-b577-aa676da17499-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.185464 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b783ab6a-d590-4bf8-b577-aa676da17499-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.185499 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b783ab6a-d590-4bf8-b577-aa676da17499-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.185542 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/b783ab6a-d590-4bf8-b577-aa676da17499-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.185582 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b783ab6a-d590-4bf8-b577-aa676da17499-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.185667 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b783ab6a-d590-4bf8-b577-aa676da17499-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.185951 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.193714 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.199887 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.207019 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.207081 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.207026 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.207395 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.207629 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-h84sv" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.228173 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.249605 4909 scope.go:117] "RemoveContainer" containerID="efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.277885 4909 scope.go:117] "RemoveContainer" containerID="b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.289185 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc 
kubenswrapper[4909]: I1126 08:17:03.289252 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b783ab6a-d590-4bf8-b577-aa676da17499-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.289297 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24nng\" (UniqueName: \"kubernetes.io/projected/b783ab6a-d590-4bf8-b577-aa676da17499-kube-api-access-24nng\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.289332 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b783ab6a-d590-4bf8-b577-aa676da17499-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.289386 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b783ab6a-d590-4bf8-b577-aa676da17499-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.289421 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b783ab6a-d590-4bf8-b577-aa676da17499-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.289458 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b783ab6a-d590-4bf8-b577-aa676da17499-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.289492 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b783ab6a-d590-4bf8-b577-aa676da17499-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.289534 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b783ab6a-d590-4bf8-b577-aa676da17499-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.290095 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b783ab6a-d590-4bf8-b577-aa676da17499-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.290338 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/b783ab6a-d590-4bf8-b577-aa676da17499-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.291474 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b783ab6a-d590-4bf8-b577-aa676da17499-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.291710 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b783ab6a-d590-4bf8-b577-aa676da17499-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.295503 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b783ab6a-d590-4bf8-b577-aa676da17499-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.298786 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b783ab6a-d590-4bf8-b577-aa676da17499-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.299332 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b783ab6a-d590-4bf8-b577-aa676da17499-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.303945 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.303988 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/08ac053324b34d6d39032c48aaba4ae6f840b918837e8021b9bda528bee464ee/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.316393 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24nng\" (UniqueName: \"kubernetes.io/projected/b783ab6a-d590-4bf8-b577-aa676da17499-kube-api-access-24nng\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.321797 4909 scope.go:117] "RemoveContainer" containerID="efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7" Nov 26 08:17:03 crc kubenswrapper[4909]: E1126 08:17:03.322308 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7\": container with ID starting with efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7 not found: ID does not exist" containerID="efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.322346 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7"} err="failed to get container status \"efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7\": rpc error: code = NotFound desc = could not find container \"efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7\": container with ID starting with efc34e7d81d13b94e4503e7d2cf918d0a4bd300913eec7cc99d85920dac018d7 not found: ID does not exist" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.322371 4909 scope.go:117] "RemoveContainer" containerID="b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699" Nov 26 08:17:03 crc kubenswrapper[4909]: E1126 08:17:03.323189 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699\": container with ID starting with b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699 not found: ID does not exist" containerID="b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.323222 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699"} err="failed to get container status \"b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699\": rpc error: code = NotFound desc = could not find container \"b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699\": container with ID starting with b10e8fc658122108e71b0fe91488b7ec44417f75897302fcee97b6d13a1ff699 not found: ID does not exist" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.351515 4909 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122e1b10-07f5-45bc-9609-f9f19da7e69c\") pod \"rabbitmq-server-0\" (UID: \"b783ab6a-d590-4bf8-b577-aa676da17499\") " pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.391043 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41841468-a0d1-4f73-b69c-06616cb18707\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.391101 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5d5\" (UniqueName: \"kubernetes.io/projected/3484134b-9037-4281-8f33-b61c0fcc4337-kube-api-access-bf5d5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.391543 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3484134b-9037-4281-8f33-b61c0fcc4337-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.391876 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3484134b-9037-4281-8f33-b61c0fcc4337-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.391907 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3484134b-9037-4281-8f33-b61c0fcc4337-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.391963 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3484134b-9037-4281-8f33-b61c0fcc4337-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.392016 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3484134b-9037-4281-8f33-b61c0fcc4337-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.392117 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3484134b-9037-4281-8f33-b61c0fcc4337-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc 
kubenswrapper[4909]: I1126 08:17:03.392914 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3484134b-9037-4281-8f33-b61c0fcc4337-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.490174 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.494805 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41841468-a0d1-4f73-b69c-06616cb18707\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.494878 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5d5\" (UniqueName: \"kubernetes.io/projected/3484134b-9037-4281-8f33-b61c0fcc4337-kube-api-access-bf5d5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.494911 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3484134b-9037-4281-8f33-b61c0fcc4337-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.494940 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3484134b-9037-4281-8f33-b61c0fcc4337-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.494963 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3484134b-9037-4281-8f33-b61c0fcc4337-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.494987 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3484134b-9037-4281-8f33-b61c0fcc4337-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.495031 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3484134b-9037-4281-8f33-b61c0fcc4337-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.495090 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3484134b-9037-4281-8f33-b61c0fcc4337-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.495117 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3484134b-9037-4281-8f33-b61c0fcc4337-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.496155 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3484134b-9037-4281-8f33-b61c0fcc4337-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.496581 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3484134b-9037-4281-8f33-b61c0fcc4337-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.496584 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3484134b-9037-4281-8f33-b61c0fcc4337-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.496835 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3484134b-9037-4281-8f33-b61c0fcc4337-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.499526 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.499563 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3484134b-9037-4281-8f33-b61c0fcc4337-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.499574 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41841468-a0d1-4f73-b69c-06616cb18707\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c50c8062e852a8b18c36a8d39d22d5c69a7617341b6b523869d660f5b1d66c88/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.501526 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3484134b-9037-4281-8f33-b61c0fcc4337-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.502047 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3484134b-9037-4281-8f33-b61c0fcc4337-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.514703 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5d5\" (UniqueName: \"kubernetes.io/projected/3484134b-9037-4281-8f33-b61c0fcc4337-kube-api-access-bf5d5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.570413 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41841468-a0d1-4f73-b69c-06616cb18707\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41841468-a0d1-4f73-b69c-06616cb18707\") pod \"rabbitmq-cell1-server-0\" (UID: \"3484134b-9037-4281-8f33-b61c0fcc4337\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.809195 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.823908 4909 generic.go:334] "Generic (PLEG): container finished" podID="a1f047da-299f-4464-a3d3-ba4311247106" containerID="7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5" exitCode=0 Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.823944 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.823987 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.823959 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" event={"ID":"a1f047da-299f-4464-a3d3-ba4311247106","Type":"ContainerDied","Data":"7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5"} Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.824087 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" event={"ID":"a1f047da-299f-4464-a3d3-ba4311247106","Type":"ContainerDied","Data":"99b0cbf9ff2f7cae6c065992eaa15976d5a72f889c03ee8460346d655788ca22"} Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.824111 4909 scope.go:117] "RemoveContainer" containerID="7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.852580 4909 scope.go:117] "RemoveContainer" containerID="600e046d4b374921aa30a5378c94c186fad91dcd41e2d52bbc872747540160ab" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.884709 4909 scope.go:117] "RemoveContainer" containerID="7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5" Nov 26 08:17:03 crc kubenswrapper[4909]: E1126 08:17:03.885304 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5\": container with ID starting with 7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5 not found: ID does not exist" containerID="7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.885368 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5"} err="failed to get container status \"7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5\": rpc error: code = NotFound desc = could not find container \"7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5\": container with ID starting with 7cca732c895a897a4ac8a869a6efad5c76bb45144ab741a94939afcc06579cf5 not found: ID does not exist" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.885403 4909 scope.go:117] "RemoveContainer" containerID="600e046d4b374921aa30a5378c94c186fad91dcd41e2d52bbc872747540160ab" Nov 26 08:17:03 crc kubenswrapper[4909]: E1126 08:17:03.885810 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"600e046d4b374921aa30a5378c94c186fad91dcd41e2d52bbc872747540160ab\": container with ID starting with 600e046d4b374921aa30a5378c94c186fad91dcd41e2d52bbc872747540160ab not found: ID does not exist" containerID="600e046d4b374921aa30a5378c94c186fad91dcd41e2d52bbc872747540160ab" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.885841 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600e046d4b374921aa30a5378c94c186fad91dcd41e2d52bbc872747540160ab"} err="failed to get container status \"600e046d4b374921aa30a5378c94c186fad91dcd41e2d52bbc872747540160ab\": rpc error: code = NotFound desc = could not find container \"600e046d4b374921aa30a5378c94c186fad91dcd41e2d52bbc872747540160ab\": container with ID starting with 600e046d4b374921aa30a5378c94c186fad91dcd41e2d52bbc872747540160ab not found: ID does not exist" Nov 26 08:17:03 crc 
kubenswrapper[4909]: I1126 08:17:03.901122 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-dns-svc\") pod \"a1f047da-299f-4464-a3d3-ba4311247106\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.901175 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssvjp\" (UniqueName: \"kubernetes.io/projected/a1f047da-299f-4464-a3d3-ba4311247106-kube-api-access-ssvjp\") pod \"a1f047da-299f-4464-a3d3-ba4311247106\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.901299 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-config\") pod \"a1f047da-299f-4464-a3d3-ba4311247106\" (UID: \"a1f047da-299f-4464-a3d3-ba4311247106\") " Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.905619 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1f047da-299f-4464-a3d3-ba4311247106-kube-api-access-ssvjp" (OuterVolumeSpecName: "kube-api-access-ssvjp") pod "a1f047da-299f-4464-a3d3-ba4311247106" (UID: "a1f047da-299f-4464-a3d3-ba4311247106"). InnerVolumeSpecName "kube-api-access-ssvjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.943245 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1f047da-299f-4464-a3d3-ba4311247106" (UID: "a1f047da-299f-4464-a3d3-ba4311247106"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.946051 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-config" (OuterVolumeSpecName: "config") pod "a1f047da-299f-4464-a3d3-ba4311247106" (UID: "a1f047da-299f-4464-a3d3-ba4311247106"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:17:03 crc kubenswrapper[4909]: W1126 08:17:03.966998 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb783ab6a_d590_4bf8_b577_aa676da17499.slice/crio-18bbddb6eae3ee53d9ee479f5417ed586873eabe4bd57f980f1aa2936282bc34 WatchSource:0}: Error finding container 18bbddb6eae3ee53d9ee479f5417ed586873eabe4bd57f980f1aa2936282bc34: Status 404 returned error can't find the container with id 18bbddb6eae3ee53d9ee479f5417ed586873eabe4bd57f980f1aa2936282bc34 Nov 26 08:17:03 crc kubenswrapper[4909]: I1126 08:17:03.967506 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 08:17:04 crc kubenswrapper[4909]: I1126 08:17:04.003471 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:04 crc kubenswrapper[4909]: I1126 08:17:04.003501 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssvjp\" (UniqueName: \"kubernetes.io/projected/a1f047da-299f-4464-a3d3-ba4311247106-kube-api-access-ssvjp\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:04 crc kubenswrapper[4909]: I1126 08:17:04.003512 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f047da-299f-4464-a3d3-ba4311247106-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:04 crc kubenswrapper[4909]: I1126 08:17:04.161656 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-fgjp7"] Nov 26 08:17:04 crc kubenswrapper[4909]: I1126 08:17:04.170887 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-fgjp7"] Nov 26 08:17:04 crc kubenswrapper[4909]: I1126 08:17:04.274557 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 08:17:04 crc kubenswrapper[4909]: W1126 08:17:04.275494 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3484134b_9037_4281_8f33_b61c0fcc4337.slice/crio-ed494bd1c4b2ab52a63a6c076822fb05cb7b9e54e9638e49a1eb37f039d82396 WatchSource:0}: Error finding container ed494bd1c4b2ab52a63a6c076822fb05cb7b9e54e9638e49a1eb37f039d82396: Status 404 returned error can't find the container with id ed494bd1c4b2ab52a63a6c076822fb05cb7b9e54e9638e49a1eb37f039d82396 Nov 26 08:17:04 crc kubenswrapper[4909]: I1126 08:17:04.512256 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa3d721-75c2-4528-bc0e-09d91be2312c" path="/var/lib/kubelet/pods/2fa3d721-75c2-4528-bc0e-09d91be2312c/volumes" Nov 26 08:17:04 crc kubenswrapper[4909]: I1126 08:17:04.513120 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1f047da-299f-4464-a3d3-ba4311247106" path="/var/lib/kubelet/pods/a1f047da-299f-4464-a3d3-ba4311247106/volumes" Nov 26 08:17:04 crc kubenswrapper[4909]: I1126 08:17:04.514878 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f86f4a4c-a96a-4325-91ef-0e8a6f63c913" path="/var/lib/kubelet/pods/f86f4a4c-a96a-4325-91ef-0e8a6f63c913/volumes" Nov 26 08:17:04 crc kubenswrapper[4909]: I1126 08:17:04.832669 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"3484134b-9037-4281-8f33-b61c0fcc4337","Type":"ContainerStarted","Data":"ed494bd1c4b2ab52a63a6c076822fb05cb7b9e54e9638e49a1eb37f039d82396"} Nov 26 08:17:04 crc kubenswrapper[4909]: I1126 08:17:04.834039 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b783ab6a-d590-4bf8-b577-aa676da17499","Type":"ContainerStarted","Data":"18bbddb6eae3ee53d9ee479f5417ed586873eabe4bd57f980f1aa2936282bc34"} Nov 26 08:17:05 crc kubenswrapper[4909]: I1126 08:17:05.848024 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b783ab6a-d590-4bf8-b577-aa676da17499","Type":"ContainerStarted","Data":"a339caed82dead38d8a0c2b65c5975a0badf5ef502f069c180c00cbb49972b75"} Nov 26 08:17:05 crc kubenswrapper[4909]: I1126 08:17:05.850331 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3484134b-9037-4281-8f33-b61c0fcc4337","Type":"ContainerStarted","Data":"3738a117ab490cf968d9e64d200b9a14b344000dc0bb18c6bc3f986b78cffa2a"} Nov 26 08:17:08 crc kubenswrapper[4909]: I1126 08:17:08.546715 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-98ddfc8f-fgjp7" podUID="a1f047da-299f-4464-a3d3-ba4311247106" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.252:5353: i/o timeout" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.646763 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2zwq4"] Nov 26 08:17:12 crc kubenswrapper[4909]: E1126 08:17:12.647727 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f047da-299f-4464-a3d3-ba4311247106" containerName="dnsmasq-dns" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.647750 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f047da-299f-4464-a3d3-ba4311247106" containerName="dnsmasq-dns" Nov 26 08:17:12 crc kubenswrapper[4909]: E1126 08:17:12.647774 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f047da-299f-4464-a3d3-ba4311247106" containerName="init" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.647789 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f047da-299f-4464-a3d3-ba4311247106" containerName="init" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.648018 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1f047da-299f-4464-a3d3-ba4311247106" containerName="dnsmasq-dns" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.649953 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.654391 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2zwq4"] Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.781894 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwvwd\" (UniqueName: \"kubernetes.io/projected/002debfd-4eaa-47cc-ac84-2195a73504da-kube-api-access-lwvwd\") pod \"certified-operators-2zwq4\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.782172 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-catalog-content\") pod \"certified-operators-2zwq4\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.782205 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-utilities\") pod \"certified-operators-2zwq4\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.883830 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwvwd\" (UniqueName: \"kubernetes.io/projected/002debfd-4eaa-47cc-ac84-2195a73504da-kube-api-access-lwvwd\") pod \"certified-operators-2zwq4\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.883889 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-catalog-content\") pod \"certified-operators-2zwq4\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.883929 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-utilities\") pod \"certified-operators-2zwq4\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.884380 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-catalog-content\") pod \"certified-operators-2zwq4\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.884404 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-utilities\") pod \"certified-operators-2zwq4\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.918042 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lwvwd\" (UniqueName: \"kubernetes.io/projected/002debfd-4eaa-47cc-ac84-2195a73504da-kube-api-access-lwvwd\") pod \"certified-operators-2zwq4\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:12 crc kubenswrapper[4909]: I1126 08:17:12.974105 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:13 crc kubenswrapper[4909]: I1126 08:17:13.499285 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2zwq4"] Nov 26 08:17:13 crc kubenswrapper[4909]: W1126 08:17:13.506198 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod002debfd_4eaa_47cc_ac84_2195a73504da.slice/crio-6f862ce5ea79bf9e5e732ad28be95eca87cf8f3083801ea60e41a203259a1e37 WatchSource:0}: Error finding container 6f862ce5ea79bf9e5e732ad28be95eca87cf8f3083801ea60e41a203259a1e37: Status 404 returned error can't find the container with id 6f862ce5ea79bf9e5e732ad28be95eca87cf8f3083801ea60e41a203259a1e37 Nov 26 08:17:13 crc kubenswrapper[4909]: I1126 08:17:13.942716 4909 generic.go:334] "Generic (PLEG): container finished" podID="002debfd-4eaa-47cc-ac84-2195a73504da" containerID="b3aec67ed2737cf4052f972a96dcf8ec6375648902a1fd0709df9d3ea9038d38" exitCode=0 Nov 26 08:17:13 crc kubenswrapper[4909]: I1126 08:17:13.942767 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zwq4" event={"ID":"002debfd-4eaa-47cc-ac84-2195a73504da","Type":"ContainerDied","Data":"b3aec67ed2737cf4052f972a96dcf8ec6375648902a1fd0709df9d3ea9038d38"} Nov 26 08:17:13 crc kubenswrapper[4909]: I1126 08:17:13.942805 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zwq4" event={"ID":"002debfd-4eaa-47cc-ac84-2195a73504da","Type":"ContainerStarted","Data":"6f862ce5ea79bf9e5e732ad28be95eca87cf8f3083801ea60e41a203259a1e37"} Nov 26 08:17:14 crc kubenswrapper[4909]: I1126 08:17:14.952180 4909 generic.go:334] "Generic (PLEG): container finished" podID="002debfd-4eaa-47cc-ac84-2195a73504da" containerID="c3136a065d5f91d1fc123c4bccb45ccc99ae42adb0f937785166e85dcebab542" exitCode=0 Nov 26 08:17:14 crc kubenswrapper[4909]: I1126 08:17:14.952231 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zwq4" event={"ID":"002debfd-4eaa-47cc-ac84-2195a73504da","Type":"ContainerDied","Data":"c3136a065d5f91d1fc123c4bccb45ccc99ae42adb0f937785166e85dcebab542"} Nov 26 08:17:15 crc kubenswrapper[4909]: I1126 08:17:15.962381 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zwq4" event={"ID":"002debfd-4eaa-47cc-ac84-2195a73504da","Type":"ContainerStarted","Data":"f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4"} Nov 26 08:17:15 crc kubenswrapper[4909]: I1126 08:17:15.986494 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2zwq4" podStartSLOduration=2.5529422 podStartE2EDuration="3.986475827s" podCreationTimestamp="2025-11-26 08:17:12 +0000 UTC" firstStartedPulling="2025-11-26 08:17:13.945947901 +0000 UTC m=+4606.092159067" lastFinishedPulling="2025-11-26 08:17:15.379481528 +0000 UTC m=+4607.525692694" observedRunningTime="2025-11-26 08:17:15.982342205 +0000 UTC 
m=+4608.128553371" watchObservedRunningTime="2025-11-26 08:17:15.986475827 +0000 UTC m=+4608.132687003" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.032278 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g4gqn"] Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.034408 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.055985 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4gqn"] Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.184633 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-catalog-content\") pod \"redhat-operators-g4gqn\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.184958 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-utilities\") pod \"redhat-operators-g4gqn\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.185117 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgxds\" (UniqueName: \"kubernetes.io/projected/abb9f962-bf59-4565-b758-e668396a29e8-kube-api-access-dgxds\") pod \"redhat-operators-g4gqn\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.286897 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-catalog-content\") pod \"redhat-operators-g4gqn\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.287333 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-utilities\") pod \"redhat-operators-g4gqn\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.287449 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgxds\" (UniqueName: \"kubernetes.io/projected/abb9f962-bf59-4565-b758-e668396a29e8-kube-api-access-dgxds\") pod \"redhat-operators-g4gqn\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.287467 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-catalog-content\") pod \"redhat-operators-g4gqn\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.287869 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-utilities\") pod \"redhat-operators-g4gqn\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.310379 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgxds\" (UniqueName: \"kubernetes.io/projected/abb9f962-bf59-4565-b758-e668396a29e8-kube-api-access-dgxds\") pod \"redhat-operators-g4gqn\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.363280 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.793482 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4gqn"] Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.994332 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4gqn" event={"ID":"abb9f962-bf59-4565-b758-e668396a29e8","Type":"ContainerStarted","Data":"b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef"} Nov 26 08:17:19 crc kubenswrapper[4909]: I1126 08:17:19.994373 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4gqn" event={"ID":"abb9f962-bf59-4565-b758-e668396a29e8","Type":"ContainerStarted","Data":"908e081c6927f106585f18c3fee980b35cd648fdf60d7c8c787f01ae643d9d07"} Nov 26 08:17:21 crc kubenswrapper[4909]: I1126 08:17:21.009103 4909 generic.go:334] "Generic (PLEG): container finished" podID="abb9f962-bf59-4565-b758-e668396a29e8" containerID="b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef" exitCode=0 Nov 26 08:17:21 crc kubenswrapper[4909]: I1126 08:17:21.009151 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4gqn" event={"ID":"abb9f962-bf59-4565-b758-e668396a29e8","Type":"ContainerDied","Data":"b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef"} Nov 26 08:17:22 crc kubenswrapper[4909]: I1126 08:17:22.020813 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4gqn" event={"ID":"abb9f962-bf59-4565-b758-e668396a29e8","Type":"ContainerStarted","Data":"86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701"} Nov 26 08:17:22 crc kubenswrapper[4909]: I1126 08:17:22.974312 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:22 crc kubenswrapper[4909]: I1126 08:17:22.974838 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:23 crc kubenswrapper[4909]: I1126 08:17:23.031017 4909 generic.go:334] "Generic (PLEG): container finished" podID="abb9f962-bf59-4565-b758-e668396a29e8" containerID="86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701" exitCode=0 Nov 26 08:17:23 crc kubenswrapper[4909]: I1126 08:17:23.031062 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4gqn" event={"ID":"abb9f962-bf59-4565-b758-e668396a29e8","Type":"ContainerDied","Data":"86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701"} Nov 26 08:17:23 crc kubenswrapper[4909]: I1126 08:17:23.037396 4909 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:23 crc kubenswrapper[4909]: I1126 08:17:23.088755 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:24 crc kubenswrapper[4909]: I1126 08:17:24.042769 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4gqn" event={"ID":"abb9f962-bf59-4565-b758-e668396a29e8","Type":"ContainerStarted","Data":"17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a"} Nov 26 08:17:24 crc kubenswrapper[4909]: I1126 08:17:24.067491 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g4gqn" podStartSLOduration=3.578357924 podStartE2EDuration="6.067466772s" podCreationTimestamp="2025-11-26 08:17:18 +0000 UTC" firstStartedPulling="2025-11-26 08:17:21.010935753 +0000 UTC m=+4613.157146919" lastFinishedPulling="2025-11-26 08:17:23.500044591 +0000 UTC m=+4615.646255767" observedRunningTime="2025-11-26 08:17:24.059441373 +0000 UTC m=+4616.205652539" watchObservedRunningTime="2025-11-26 08:17:24.067466772 +0000 UTC m=+4616.213677938" Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.413153 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2zwq4"] Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.413837 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2zwq4" podUID="002debfd-4eaa-47cc-ac84-2195a73504da" containerName="registry-server" containerID="cri-o://f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4" gracePeriod=2 Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.826641 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.893387 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-catalog-content\") pod \"002debfd-4eaa-47cc-ac84-2195a73504da\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.893479 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwvwd\" (UniqueName: \"kubernetes.io/projected/002debfd-4eaa-47cc-ac84-2195a73504da-kube-api-access-lwvwd\") pod \"002debfd-4eaa-47cc-ac84-2195a73504da\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.893661 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-utilities\") pod \"002debfd-4eaa-47cc-ac84-2195a73504da\" (UID: \"002debfd-4eaa-47cc-ac84-2195a73504da\") " Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.894529 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-utilities" (OuterVolumeSpecName: "utilities") pod "002debfd-4eaa-47cc-ac84-2195a73504da" (UID: "002debfd-4eaa-47cc-ac84-2195a73504da"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.901037 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/002debfd-4eaa-47cc-ac84-2195a73504da-kube-api-access-lwvwd" (OuterVolumeSpecName: "kube-api-access-lwvwd") pod "002debfd-4eaa-47cc-ac84-2195a73504da" (UID: "002debfd-4eaa-47cc-ac84-2195a73504da"). InnerVolumeSpecName "kube-api-access-lwvwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.943368 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "002debfd-4eaa-47cc-ac84-2195a73504da" (UID: "002debfd-4eaa-47cc-ac84-2195a73504da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.995329 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.995367 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwvwd\" (UniqueName: \"kubernetes.io/projected/002debfd-4eaa-47cc-ac84-2195a73504da-kube-api-access-lwvwd\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:25 crc kubenswrapper[4909]: I1126 08:17:25.995377 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002debfd-4eaa-47cc-ac84-2195a73504da-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.063801 4909 generic.go:334] "Generic (PLEG): container finished" podID="002debfd-4eaa-47cc-ac84-2195a73504da" containerID="f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4" exitCode=0 Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.063879 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zwq4" event={"ID":"002debfd-4eaa-47cc-ac84-2195a73504da","Type":"ContainerDied","Data":"f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4"} Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.063944 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zwq4" event={"ID":"002debfd-4eaa-47cc-ac84-2195a73504da","Type":"ContainerDied","Data":"6f862ce5ea79bf9e5e732ad28be95eca87cf8f3083801ea60e41a203259a1e37"} Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.063966 4909 scope.go:117] "RemoveContainer" containerID="f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4" Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.063895 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2zwq4" Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.085139 4909 scope.go:117] "RemoveContainer" containerID="c3136a065d5f91d1fc123c4bccb45ccc99ae42adb0f937785166e85dcebab542" Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.096794 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2zwq4"] Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.103837 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2zwq4"] Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.130093 4909 scope.go:117] "RemoveContainer" containerID="b3aec67ed2737cf4052f972a96dcf8ec6375648902a1fd0709df9d3ea9038d38" Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.148725 4909 scope.go:117] "RemoveContainer" containerID="f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4" Nov 26 08:17:26 crc kubenswrapper[4909]: E1126 08:17:26.149196 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4\": container with ID starting with f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4 not found: ID does not exist" containerID="f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4" Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.149251 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4"} err="failed to get container status \"f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4\": rpc error: code = NotFound desc = could not find container \"f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4\": container with ID starting with f54605dede11f8011e000d6211400ca87e3e8ab0fa5230b83e916685854401b4 not found: ID does not exist" Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.149281 4909 scope.go:117] "RemoveContainer" containerID="c3136a065d5f91d1fc123c4bccb45ccc99ae42adb0f937785166e85dcebab542" Nov 26 08:17:26 crc kubenswrapper[4909]: E1126 08:17:26.149563 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3136a065d5f91d1fc123c4bccb45ccc99ae42adb0f937785166e85dcebab542\": container with ID starting with c3136a065d5f91d1fc123c4bccb45ccc99ae42adb0f937785166e85dcebab542 not found: ID does not exist" containerID="c3136a065d5f91d1fc123c4bccb45ccc99ae42adb0f937785166e85dcebab542" Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.149630 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3136a065d5f91d1fc123c4bccb45ccc99ae42adb0f937785166e85dcebab542"} err="failed to get container status \"c3136a065d5f91d1fc123c4bccb45ccc99ae42adb0f937785166e85dcebab542\": rpc error: code = NotFound desc = could not find container \"c3136a065d5f91d1fc123c4bccb45ccc99ae42adb0f937785166e85dcebab542\": container with ID starting with c3136a065d5f91d1fc123c4bccb45ccc99ae42adb0f937785166e85dcebab542 not found: ID does not exist" Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.149649 4909 scope.go:117] "RemoveContainer" containerID="b3aec67ed2737cf4052f972a96dcf8ec6375648902a1fd0709df9d3ea9038d38" Nov 26 08:17:26 crc kubenswrapper[4909]: E1126 08:17:26.149942 4909 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b3aec67ed2737cf4052f972a96dcf8ec6375648902a1fd0709df9d3ea9038d38\": container with ID starting with b3aec67ed2737cf4052f972a96dcf8ec6375648902a1fd0709df9d3ea9038d38 not found: ID does not exist" containerID="b3aec67ed2737cf4052f972a96dcf8ec6375648902a1fd0709df9d3ea9038d38" Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.149992 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3aec67ed2737cf4052f972a96dcf8ec6375648902a1fd0709df9d3ea9038d38"} err="failed to get container status \"b3aec67ed2737cf4052f972a96dcf8ec6375648902a1fd0709df9d3ea9038d38\": rpc error: code = NotFound desc = could not find container \"b3aec67ed2737cf4052f972a96dcf8ec6375648902a1fd0709df9d3ea9038d38\": container with ID starting with b3aec67ed2737cf4052f972a96dcf8ec6375648902a1fd0709df9d3ea9038d38 not found: ID does not exist" Nov 26 08:17:26 crc kubenswrapper[4909]: I1126 08:17:26.509478 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="002debfd-4eaa-47cc-ac84-2195a73504da" path="/var/lib/kubelet/pods/002debfd-4eaa-47cc-ac84-2195a73504da/volumes" Nov 26 08:17:29 crc kubenswrapper[4909]: I1126 08:17:29.363810 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:29 crc kubenswrapper[4909]: I1126 08:17:29.364882 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:29 crc kubenswrapper[4909]: I1126 08:17:29.421974 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:30 crc kubenswrapper[4909]: I1126 08:17:30.172324 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:30 crc kubenswrapper[4909]: I1126 08:17:30.410624 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4gqn"] Nov 26 08:17:32 crc kubenswrapper[4909]: I1126 08:17:32.145990 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g4gqn" podUID="abb9f962-bf59-4565-b758-e668396a29e8" containerName="registry-server" containerID="cri-o://17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a" gracePeriod=2 Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.056638 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.112446 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-utilities\") pod \"abb9f962-bf59-4565-b758-e668396a29e8\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.112689 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-catalog-content\") pod \"abb9f962-bf59-4565-b758-e668396a29e8\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.112724 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgxds\" (UniqueName: \"kubernetes.io/projected/abb9f962-bf59-4565-b758-e668396a29e8-kube-api-access-dgxds\") pod \"abb9f962-bf59-4565-b758-e668396a29e8\" (UID: \"abb9f962-bf59-4565-b758-e668396a29e8\") " Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.113630 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-utilities" (OuterVolumeSpecName: "utilities") pod "abb9f962-bf59-4565-b758-e668396a29e8" (UID: "abb9f962-bf59-4565-b758-e668396a29e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.118426 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb9f962-bf59-4565-b758-e668396a29e8-kube-api-access-dgxds" (OuterVolumeSpecName: "kube-api-access-dgxds") pod "abb9f962-bf59-4565-b758-e668396a29e8" (UID: "abb9f962-bf59-4565-b758-e668396a29e8"). InnerVolumeSpecName "kube-api-access-dgxds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.155424 4909 generic.go:334] "Generic (PLEG): container finished" podID="abb9f962-bf59-4565-b758-e668396a29e8" containerID="17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a" exitCode=0 Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.155472 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4gqn" event={"ID":"abb9f962-bf59-4565-b758-e668396a29e8","Type":"ContainerDied","Data":"17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a"} Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.155494 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4gqn" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.155508 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4gqn" event={"ID":"abb9f962-bf59-4565-b758-e668396a29e8","Type":"ContainerDied","Data":"908e081c6927f106585f18c3fee980b35cd648fdf60d7c8c787f01ae643d9d07"} Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.155529 4909 scope.go:117] "RemoveContainer" containerID="17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.176336 4909 scope.go:117] "RemoveContainer" containerID="86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.196549 4909 scope.go:117] "RemoveContainer" containerID="b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.214150 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgxds\" (UniqueName: \"kubernetes.io/projected/abb9f962-bf59-4565-b758-e668396a29e8-kube-api-access-dgxds\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.214241 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.224744 4909 scope.go:117] "RemoveContainer" containerID="17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.225010 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abb9f962-bf59-4565-b758-e668396a29e8" (UID: "abb9f962-bf59-4565-b758-e668396a29e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:17:33 crc kubenswrapper[4909]: E1126 08:17:33.225226 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a\": container with ID starting with 17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a not found: ID does not exist" containerID="17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.225254 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a"} err="failed to get container status \"17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a\": rpc error: code = NotFound desc = could not find container \"17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a\": container with ID starting with 17a9e3ccdc08d857ad5707ded341f8cbce86b58189d92132722d644c82e7334a not found: ID does not exist" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.225284 4909 scope.go:117] "RemoveContainer" containerID="86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701" Nov 26 08:17:33 crc kubenswrapper[4909]: E1126 08:17:33.225666 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701\": container with ID starting with 86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701 not found: ID does not exist" containerID="86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.225709 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701"} err="failed to get container status \"86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701\": rpc error: code = NotFound desc = could not find container \"86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701\": container with ID starting with 86e4f45414a9555c40255f0188eaeec3075073bec3fb12909921297a9c86f701 not found: ID does not exist" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.225783 4909 scope.go:117] "RemoveContainer" containerID="b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef" Nov 26 08:17:33 crc kubenswrapper[4909]: E1126 08:17:33.226128 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef\": container with ID starting with b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef not found: ID does not exist" containerID="b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.226173 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef"} err="failed to get container status \"b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef\": rpc error: code = NotFound desc = could not find container \"b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef\": container with ID starting with 
b9669847c0f0fb48e478a1f294559e03152d3b052393cb606f3d0cbf24370fef not found: ID does not exist" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.316085 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9f962-bf59-4565-b758-e668396a29e8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.496661 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4gqn"] Nov 26 08:17:33 crc kubenswrapper[4909]: I1126 08:17:33.505330 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g4gqn"] Nov 26 08:17:34 crc kubenswrapper[4909]: I1126 08:17:34.514226 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abb9f962-bf59-4565-b758-e668396a29e8" path="/var/lib/kubelet/pods/abb9f962-bf59-4565-b758-e668396a29e8/volumes" Nov 26 08:17:37 crc kubenswrapper[4909]: E1126 08:17:37.457139 4909 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb783ab6a_d590_4bf8_b577_aa676da17499.slice/crio-a339caed82dead38d8a0c2b65c5975a0badf5ef502f069c180c00cbb49972b75.scope\": RecentStats: unable to find data in memory cache]" Nov 26 08:17:38 crc kubenswrapper[4909]: I1126 08:17:38.200816 4909 generic.go:334] "Generic (PLEG): container finished" podID="b783ab6a-d590-4bf8-b577-aa676da17499" containerID="a339caed82dead38d8a0c2b65c5975a0badf5ef502f069c180c00cbb49972b75" exitCode=0 Nov 26 08:17:38 crc kubenswrapper[4909]: I1126 08:17:38.200908 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b783ab6a-d590-4bf8-b577-aa676da17499","Type":"ContainerDied","Data":"a339caed82dead38d8a0c2b65c5975a0badf5ef502f069c180c00cbb49972b75"} Nov 26 08:17:38 crc kubenswrapper[4909]: I1126 08:17:38.202808 4909 generic.go:334] "Generic (PLEG): container finished" podID="3484134b-9037-4281-8f33-b61c0fcc4337" containerID="3738a117ab490cf968d9e64d200b9a14b344000dc0bb18c6bc3f986b78cffa2a" exitCode=0 Nov 26 08:17:38 crc kubenswrapper[4909]: I1126 08:17:38.202849 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3484134b-9037-4281-8f33-b61c0fcc4337","Type":"ContainerDied","Data":"3738a117ab490cf968d9e64d200b9a14b344000dc0bb18c6bc3f986b78cffa2a"} Nov 26 08:17:39 crc kubenswrapper[4909]: I1126 08:17:39.217227 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3484134b-9037-4281-8f33-b61c0fcc4337","Type":"ContainerStarted","Data":"5d231db92bfda3710339425bbeb1fdd33c6905868377fab0e6ad7bc7826dfd58"} Nov 26 08:17:39 crc kubenswrapper[4909]: I1126 08:17:39.217790 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:17:39 crc kubenswrapper[4909]: I1126 08:17:39.220348 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b783ab6a-d590-4bf8-b577-aa676da17499","Type":"ContainerStarted","Data":"4a77c86a46177d57dd99dbb146d80df682a3b8ea718acdb9fd1e0eacc52a86b0"} Nov 26 08:17:39 crc kubenswrapper[4909]: I1126 08:17:39.220952 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 26 08:17:39 crc kubenswrapper[4909]: I1126 08:17:39.250124 4909 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.250102267 podStartE2EDuration="36.250102267s" podCreationTimestamp="2025-11-26 08:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:17:39.24800824 +0000 UTC m=+4631.394219436" watchObservedRunningTime="2025-11-26 08:17:39.250102267 +0000 UTC m=+4631.396313433" Nov 26 08:17:39 crc kubenswrapper[4909]: I1126 08:17:39.282428 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.282410668 podStartE2EDuration="36.282410668s" podCreationTimestamp="2025-11-26 08:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:17:39.280763293 +0000 UTC m=+4631.426974459" watchObservedRunningTime="2025-11-26 08:17:39.282410668 +0000 UTC m=+4631.428621834" Nov 26 08:17:53 crc kubenswrapper[4909]: I1126 08:17:53.493554 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 26 08:17:53 crc kubenswrapper[4909]: I1126 08:17:53.826837 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.153454 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Nov 26 08:18:05 crc kubenswrapper[4909]: E1126 08:18:05.154487 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002debfd-4eaa-47cc-ac84-2195a73504da" containerName="registry-server" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.154507 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="002debfd-4eaa-47cc-ac84-2195a73504da" containerName="registry-server" Nov 26 08:18:05 crc kubenswrapper[4909]: E1126 08:18:05.154521 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002debfd-4eaa-47cc-ac84-2195a73504da" containerName="extract-utilities" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.154529 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="002debfd-4eaa-47cc-ac84-2195a73504da" containerName="extract-utilities" Nov 26 08:18:05 crc kubenswrapper[4909]: E1126 08:18:05.154549 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb9f962-bf59-4565-b758-e668396a29e8" containerName="extract-utilities" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.154558 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb9f962-bf59-4565-b758-e668396a29e8" containerName="extract-utilities" Nov 26 08:18:05 crc kubenswrapper[4909]: E1126 08:18:05.154568 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002debfd-4eaa-47cc-ac84-2195a73504da" containerName="extract-content" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.154576 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="002debfd-4eaa-47cc-ac84-2195a73504da" containerName="extract-content" Nov 26 08:18:05 crc kubenswrapper[4909]: E1126 08:18:05.154640 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb9f962-bf59-4565-b758-e668396a29e8" containerName="extract-content" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.154649 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb9f962-bf59-4565-b758-e668396a29e8" containerName="extract-content" Nov 26 08:18:05 crc kubenswrapper[4909]: 
E1126 08:18:05.154663 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb9f962-bf59-4565-b758-e668396a29e8" containerName="registry-server" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.154673 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb9f962-bf59-4565-b758-e668396a29e8" containerName="registry-server" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.154880 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb9f962-bf59-4565-b758-e668396a29e8" containerName="registry-server" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.154916 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="002debfd-4eaa-47cc-ac84-2195a73504da" containerName="registry-server" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.155538 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.158443 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-c9fl9" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.162361 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.325627 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttdbr\" (UniqueName: \"kubernetes.io/projected/fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2-kube-api-access-ttdbr\") pod \"mariadb-client-1-default\" (UID: \"fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2\") " pod="openstack/mariadb-client-1-default" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.427168 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttdbr\" (UniqueName: \"kubernetes.io/projected/fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2-kube-api-access-ttdbr\") pod \"mariadb-client-1-default\" (UID: \"fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2\") " pod="openstack/mariadb-client-1-default" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.462300 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttdbr\" (UniqueName: \"kubernetes.io/projected/fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2-kube-api-access-ttdbr\") pod \"mariadb-client-1-default\" (UID: \"fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2\") " pod="openstack/mariadb-client-1-default" Nov 26 08:18:05 crc kubenswrapper[4909]: I1126 08:18:05.500751 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 26 08:18:06 crc kubenswrapper[4909]: I1126 08:18:06.029014 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 26 08:18:06 crc kubenswrapper[4909]: I1126 08:18:06.456009 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2","Type":"ContainerStarted","Data":"5a296f6b121b4883205bf1a819f0fa2e536a8458bd54fad1750eee4813a04ed8"} Nov 26 08:18:07 crc kubenswrapper[4909]: I1126 08:18:07.469770 4909 generic.go:334] "Generic (PLEG): container finished" podID="fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2" containerID="3dbe581553ce8168711fe40da33103920380d42dcf942f45d4c5f7df10acd5db" exitCode=0 Nov 26 08:18:07 crc kubenswrapper[4909]: I1126 08:18:07.469818 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2","Type":"ContainerDied","Data":"3dbe581553ce8168711fe40da33103920380d42dcf942f45d4c5f7df10acd5db"} Nov 26 08:18:08 crc kubenswrapper[4909]: I1126 08:18:08.951422 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 26 08:18:08 crc kubenswrapper[4909]: I1126 08:18:08.982335 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2/mariadb-client-1-default/0.log" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.013388 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.019281 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.089401 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttdbr\" (UniqueName: \"kubernetes.io/projected/fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2-kube-api-access-ttdbr\") pod \"fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2\" (UID: \"fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2\") " Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.097341 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2-kube-api-access-ttdbr" (OuterVolumeSpecName: "kube-api-access-ttdbr") pod "fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2" (UID: "fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2"). InnerVolumeSpecName "kube-api-access-ttdbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.191709 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttdbr\" (UniqueName: \"kubernetes.io/projected/fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2-kube-api-access-ttdbr\") on node \"crc\" DevicePath \"\"" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.495541 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a296f6b121b4883205bf1a819f0fa2e536a8458bd54fad1750eee4813a04ed8" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.495673 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.524247 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Nov 26 08:18:09 crc kubenswrapper[4909]: E1126 08:18:09.524734 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2" containerName="mariadb-client-1-default" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.524756 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2" containerName="mariadb-client-1-default" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.525578 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2" containerName="mariadb-client-1-default" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.526378 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.532699 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.534243 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-c9fl9" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.699864 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46xgs\" (UniqueName: \"kubernetes.io/projected/4ae3df18-3ffd-4779-8534-794dc6afce96-kube-api-access-46xgs\") pod \"mariadb-client-2-default\" (UID: \"4ae3df18-3ffd-4779-8534-794dc6afce96\") " pod="openstack/mariadb-client-2-default" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.801090 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46xgs\" (UniqueName: \"kubernetes.io/projected/4ae3df18-3ffd-4779-8534-794dc6afce96-kube-api-access-46xgs\") pod \"mariadb-client-2-default\" (UID: \"4ae3df18-3ffd-4779-8534-794dc6afce96\") " pod="openstack/mariadb-client-2-default" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.819775 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46xgs\" (UniqueName: \"kubernetes.io/projected/4ae3df18-3ffd-4779-8534-794dc6afce96-kube-api-access-46xgs\") pod \"mariadb-client-2-default\" (UID: \"4ae3df18-3ffd-4779-8534-794dc6afce96\") " pod="openstack/mariadb-client-2-default" Nov 26 08:18:09 crc kubenswrapper[4909]: I1126 08:18:09.854221 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 26 08:18:10 crc kubenswrapper[4909]: I1126 08:18:10.381463 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 26 08:18:10 crc kubenswrapper[4909]: W1126 08:18:10.385457 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ae3df18_3ffd_4779_8534_794dc6afce96.slice/crio-4085ff9fdf2f22bc0fb0882ae9e430b22a0d023cadd961d2404d435383ad5a15 WatchSource:0}: Error finding container 4085ff9fdf2f22bc0fb0882ae9e430b22a0d023cadd961d2404d435383ad5a15: Status 404 returned error can't find the container with id 4085ff9fdf2f22bc0fb0882ae9e430b22a0d023cadd961d2404d435383ad5a15 Nov 26 08:18:10 crc kubenswrapper[4909]: I1126 08:18:10.510226 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2" path="/var/lib/kubelet/pods/fcdf07c8-3097-45e1-9b58-a5b1e6e3bcb2/volumes" Nov 26 08:18:10 crc kubenswrapper[4909]: I1126 08:18:10.511521 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"4ae3df18-3ffd-4779-8534-794dc6afce96","Type":"ContainerStarted","Data":"4085ff9fdf2f22bc0fb0882ae9e430b22a0d023cadd961d2404d435383ad5a15"} Nov 26 08:18:11 crc kubenswrapper[4909]: I1126 08:18:11.516254 4909 generic.go:334] "Generic (PLEG): container finished" podID="4ae3df18-3ffd-4779-8534-794dc6afce96" containerID="73e671a5833510c819c3ab37952f95312ac41df893dcbfd4bedc45947527c74e" exitCode=1 Nov 26 08:18:11 crc kubenswrapper[4909]: I1126 08:18:11.516299 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"4ae3df18-3ffd-4779-8534-794dc6afce96","Type":"ContainerDied","Data":"73e671a5833510c819c3ab37952f95312ac41df893dcbfd4bedc45947527c74e"} Nov 26 08:18:12 crc kubenswrapper[4909]: I1126 08:18:12.896697 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 26 08:18:12 crc kubenswrapper[4909]: I1126 08:18:12.923151 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2-default_4ae3df18-3ffd-4779-8534-794dc6afce96/mariadb-client-2-default/0.log" Nov 26 08:18:12 crc kubenswrapper[4909]: I1126 08:18:12.957340 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 26 08:18:12 crc kubenswrapper[4909]: I1126 08:18:12.964083 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.059650 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46xgs\" (UniqueName: \"kubernetes.io/projected/4ae3df18-3ffd-4779-8534-794dc6afce96-kube-api-access-46xgs\") pod \"4ae3df18-3ffd-4779-8534-794dc6afce96\" (UID: \"4ae3df18-3ffd-4779-8534-794dc6afce96\") " Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.068239 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae3df18-3ffd-4779-8534-794dc6afce96-kube-api-access-46xgs" (OuterVolumeSpecName: "kube-api-access-46xgs") pod "4ae3df18-3ffd-4779-8534-794dc6afce96" (UID: "4ae3df18-3ffd-4779-8534-794dc6afce96"). InnerVolumeSpecName "kube-api-access-46xgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.160840 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46xgs\" (UniqueName: \"kubernetes.io/projected/4ae3df18-3ffd-4779-8534-794dc6afce96-kube-api-access-46xgs\") on node \"crc\" DevicePath \"\"" Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.414374 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Nov 26 08:18:13 crc kubenswrapper[4909]: E1126 08:18:13.414853 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae3df18-3ffd-4779-8534-794dc6afce96" containerName="mariadb-client-2-default" Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.414881 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae3df18-3ffd-4779-8534-794dc6afce96" containerName="mariadb-client-2-default" Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.415175 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae3df18-3ffd-4779-8534-794dc6afce96" containerName="mariadb-client-2-default" Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.415871 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.415995 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.465453 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kllsg\" (UniqueName: \"kubernetes.io/projected/71e0fc8c-8161-4149-ae54-b6101cd10163-kube-api-access-kllsg\") pod \"mariadb-client-1\" (UID: \"71e0fc8c-8161-4149-ae54-b6101cd10163\") " pod="openstack/mariadb-client-1" Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.534115 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4085ff9fdf2f22bc0fb0882ae9e430b22a0d023cadd961d2404d435383ad5a15" Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.534159 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.567262 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kllsg\" (UniqueName: \"kubernetes.io/projected/71e0fc8c-8161-4149-ae54-b6101cd10163-kube-api-access-kllsg\") pod \"mariadb-client-1\" (UID: \"71e0fc8c-8161-4149-ae54-b6101cd10163\") " pod="openstack/mariadb-client-1" Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.584218 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kllsg\" (UniqueName: \"kubernetes.io/projected/71e0fc8c-8161-4149-ae54-b6101cd10163-kube-api-access-kllsg\") pod \"mariadb-client-1\" (UID: \"71e0fc8c-8161-4149-ae54-b6101cd10163\") " pod="openstack/mariadb-client-1" Nov 26 08:18:13 crc kubenswrapper[4909]: I1126 08:18:13.745275 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 26 08:18:14 crc kubenswrapper[4909]: I1126 08:18:14.273369 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 26 08:18:14 crc kubenswrapper[4909]: W1126 08:18:14.276585 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71e0fc8c_8161_4149_ae54_b6101cd10163.slice/crio-7bcd538a7a0fee7d2111614285e5b0a5d2a46e797bc4e9663868409a76315382 WatchSource:0}: Error finding container 7bcd538a7a0fee7d2111614285e5b0a5d2a46e797bc4e9663868409a76315382: Status 404 returned error can't find the container with id 7bcd538a7a0fee7d2111614285e5b0a5d2a46e797bc4e9663868409a76315382 Nov 26 08:18:14 crc kubenswrapper[4909]: I1126 08:18:14.508756 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ae3df18-3ffd-4779-8534-794dc6afce96" path="/var/lib/kubelet/pods/4ae3df18-3ffd-4779-8534-794dc6afce96/volumes" Nov 26 08:18:14 crc kubenswrapper[4909]: I1126 08:18:14.552247 4909 generic.go:334] "Generic (PLEG): container finished" podID="71e0fc8c-8161-4149-ae54-b6101cd10163" containerID="ea77dceda0d308f348a01ec565a68025da2cc562b6ac7b9eb37868f5656e314a" exitCode=0 Nov 26 08:18:14 crc kubenswrapper[4909]: I1126 08:18:14.552293 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"71e0fc8c-8161-4149-ae54-b6101cd10163","Type":"ContainerDied","Data":"ea77dceda0d308f348a01ec565a68025da2cc562b6ac7b9eb37868f5656e314a"} Nov 26 08:18:14 crc kubenswrapper[4909]: I1126 08:18:14.552323 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"71e0fc8c-8161-4149-ae54-b6101cd10163","Type":"ContainerStarted","Data":"7bcd538a7a0fee7d2111614285e5b0a5d2a46e797bc4e9663868409a76315382"} Nov 26 08:18:15 crc kubenswrapper[4909]: I1126 08:18:15.981388 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:15.999965 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_71e0fc8c-8161-4149-ae54-b6101cd10163/mariadb-client-1/0.log" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.030963 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.040494 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.106115 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kllsg\" (UniqueName: \"kubernetes.io/projected/71e0fc8c-8161-4149-ae54-b6101cd10163-kube-api-access-kllsg\") pod \"71e0fc8c-8161-4149-ae54-b6101cd10163\" (UID: \"71e0fc8c-8161-4149-ae54-b6101cd10163\") " Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.116857 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e0fc8c-8161-4149-ae54-b6101cd10163-kube-api-access-kllsg" (OuterVolumeSpecName: "kube-api-access-kllsg") pod "71e0fc8c-8161-4149-ae54-b6101cd10163" (UID: "71e0fc8c-8161-4149-ae54-b6101cd10163"). InnerVolumeSpecName "kube-api-access-kllsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.208833 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kllsg\" (UniqueName: \"kubernetes.io/projected/71e0fc8c-8161-4149-ae54-b6101cd10163-kube-api-access-kllsg\") on node \"crc\" DevicePath \"\"" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.438393 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Nov 26 08:18:16 crc kubenswrapper[4909]: E1126 08:18:16.438952 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e0fc8c-8161-4149-ae54-b6101cd10163" containerName="mariadb-client-1" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.438984 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e0fc8c-8161-4149-ae54-b6101cd10163" containerName="mariadb-client-1" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.439235 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e0fc8c-8161-4149-ae54-b6101cd10163" containerName="mariadb-client-1" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.440079 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.449367 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.511827 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71e0fc8c-8161-4149-ae54-b6101cd10163" path="/var/lib/kubelet/pods/71e0fc8c-8161-4149-ae54-b6101cd10163/volumes" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.513926 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxw7s\" (UniqueName: \"kubernetes.io/projected/0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41-kube-api-access-bxw7s\") pod \"mariadb-client-4-default\" (UID: \"0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41\") " pod="openstack/mariadb-client-4-default" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.575649 4909 scope.go:117] "RemoveContainer" containerID="ea77dceda0d308f348a01ec565a68025da2cc562b6ac7b9eb37868f5656e314a" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.575673 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.615400 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxw7s\" (UniqueName: \"kubernetes.io/projected/0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41-kube-api-access-bxw7s\") pod \"mariadb-client-4-default\" (UID: \"0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41\") " pod="openstack/mariadb-client-4-default" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.639761 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxw7s\" (UniqueName: \"kubernetes.io/projected/0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41-kube-api-access-bxw7s\") pod \"mariadb-client-4-default\" (UID: \"0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41\") " pod="openstack/mariadb-client-4-default" Nov 26 08:18:16 crc kubenswrapper[4909]: I1126 08:18:16.770252 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 26 08:18:17 crc kubenswrapper[4909]: W1126 08:18:17.286695 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a1e3cc4_ebdb_4225_bdcd_dcb95bf98f41.slice/crio-7d50fdbdbb8234b24e4bd699cfe8db0c27ab6817ebda4d441e2df88c5cf57fef WatchSource:0}: Error finding container 7d50fdbdbb8234b24e4bd699cfe8db0c27ab6817ebda4d441e2df88c5cf57fef: Status 404 returned error can't find the container with id 7d50fdbdbb8234b24e4bd699cfe8db0c27ab6817ebda4d441e2df88c5cf57fef Nov 26 08:18:17 crc kubenswrapper[4909]: I1126 08:18:17.288985 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 26 08:18:17 crc kubenswrapper[4909]: I1126 08:18:17.588992 4909 generic.go:334] "Generic (PLEG): container finished" podID="0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41" containerID="1adac554221a353b2c9bc846f58f079fc71f9856e5847551a7238675e8f591c3" exitCode=0 Nov 26 08:18:17 crc kubenswrapper[4909]: I1126 08:18:17.589062 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41","Type":"ContainerDied","Data":"1adac554221a353b2c9bc846f58f079fc71f9856e5847551a7238675e8f591c3"} Nov 26 08:18:17 crc kubenswrapper[4909]: I1126 08:18:17.589397 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41","Type":"ContainerStarted","Data":"7d50fdbdbb8234b24e4bd699cfe8db0c27ab6817ebda4d441e2df88c5cf57fef"} Nov 26 08:18:19 crc kubenswrapper[4909]: I1126 08:18:19.107925 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 26 08:18:19 crc kubenswrapper[4909]: I1126 08:18:19.129868 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41/mariadb-client-4-default/0.log" Nov 26 08:18:19 crc kubenswrapper[4909]: I1126 08:18:19.156697 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 26 08:18:19 crc kubenswrapper[4909]: I1126 08:18:19.166794 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 26 08:18:19 crc kubenswrapper[4909]: I1126 08:18:19.196382 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxw7s\" (UniqueName: \"kubernetes.io/projected/0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41-kube-api-access-bxw7s\") pod \"0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41\" (UID: \"0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41\") " Nov 26 08:18:19 crc kubenswrapper[4909]: I1126 08:18:19.203141 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41-kube-api-access-bxw7s" (OuterVolumeSpecName: "kube-api-access-bxw7s") pod "0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41" (UID: "0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41"). InnerVolumeSpecName "kube-api-access-bxw7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:18:19 crc kubenswrapper[4909]: I1126 08:18:19.298226 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxw7s\" (UniqueName: \"kubernetes.io/projected/0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41-kube-api-access-bxw7s\") on node \"crc\" DevicePath \"\"" Nov 26 08:18:19 crc kubenswrapper[4909]: I1126 08:18:19.612496 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d50fdbdbb8234b24e4bd699cfe8db0c27ab6817ebda4d441e2df88c5cf57fef" Nov 26 08:18:19 crc kubenswrapper[4909]: I1126 08:18:19.612616 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 26 08:18:20 crc kubenswrapper[4909]: I1126 08:18:20.515058 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41" path="/var/lib/kubelet/pods/0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41/volumes" Nov 26 08:18:24 crc kubenswrapper[4909]: I1126 08:18:24.276732 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Nov 26 08:18:24 crc kubenswrapper[4909]: E1126 08:18:24.277703 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41" containerName="mariadb-client-4-default" Nov 26 08:18:24 crc kubenswrapper[4909]: I1126 08:18:24.277726 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41" containerName="mariadb-client-4-default" Nov 26 08:18:24 crc kubenswrapper[4909]: I1126 08:18:24.277982 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a1e3cc4-ebdb-4225-bdcd-dcb95bf98f41" containerName="mariadb-client-4-default" Nov 26 08:18:24 crc kubenswrapper[4909]: I1126 08:18:24.278826 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 26 08:18:24 crc kubenswrapper[4909]: I1126 08:18:24.281261 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-c9fl9" Nov 26 08:18:24 crc kubenswrapper[4909]: I1126 08:18:24.288761 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 26 08:18:24 crc kubenswrapper[4909]: I1126 08:18:24.370835 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvp8\" (UniqueName: \"kubernetes.io/projected/0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee-kube-api-access-vpvp8\") pod \"mariadb-client-5-default\" (UID: \"0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee\") " pod="openstack/mariadb-client-5-default" Nov 26 08:18:24 crc kubenswrapper[4909]: I1126 08:18:24.472993 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpvp8\" (UniqueName: \"kubernetes.io/projected/0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee-kube-api-access-vpvp8\") pod \"mariadb-client-5-default\" (UID: \"0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee\") " pod="openstack/mariadb-client-5-default" Nov 26 08:18:24 crc kubenswrapper[4909]: I1126 08:18:24.497866 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpvp8\" (UniqueName: \"kubernetes.io/projected/0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee-kube-api-access-vpvp8\") pod \"mariadb-client-5-default\" (UID: \"0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee\") " pod="openstack/mariadb-client-5-default" Nov 26 08:18:24 crc kubenswrapper[4909]: I1126 08:18:24.615157 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 26 08:18:25 crc kubenswrapper[4909]: I1126 08:18:25.095221 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 26 08:18:25 crc kubenswrapper[4909]: I1126 08:18:25.660794 4909 generic.go:334] "Generic (PLEG): container finished" podID="0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee" containerID="54a072066f44826ab02982db86fa001471015a7111a0bb65717e3e06f83a8837" exitCode=0 Nov 26 08:18:25 crc kubenswrapper[4909]: I1126 08:18:25.660999 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee","Type":"ContainerDied","Data":"54a072066f44826ab02982db86fa001471015a7111a0bb65717e3e06f83a8837"} Nov 26 08:18:25 crc kubenswrapper[4909]: I1126 08:18:25.661262 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee","Type":"ContainerStarted","Data":"a2f6b7b95470562e21fff792b7e02729b394fedbdf427f1b097ebb69b10e4c38"} Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.018980 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.036998 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee/mariadb-client-5-default/0.log" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.064575 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.070693 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.108895 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpvp8\" (UniqueName: \"kubernetes.io/projected/0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee-kube-api-access-vpvp8\") pod \"0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee\" (UID: \"0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee\") " Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.115782 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee-kube-api-access-vpvp8" (OuterVolumeSpecName: "kube-api-access-vpvp8") pod "0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee" (UID: "0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee"). InnerVolumeSpecName "kube-api-access-vpvp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.191311 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Nov 26 08:18:27 crc kubenswrapper[4909]: E1126 08:18:27.191700 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee" containerName="mariadb-client-5-default" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.191719 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee" containerName="mariadb-client-5-default" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.191940 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee" containerName="mariadb-client-5-default" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.192690 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.198297 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.211030 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpvp8\" (UniqueName: \"kubernetes.io/projected/0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee-kube-api-access-vpvp8\") on node \"crc\" DevicePath \"\"" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.312312 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8p7x\" (UniqueName: \"kubernetes.io/projected/33ce6327-f2cb-40a4-b0a3-c2e0b22c1995-kube-api-access-d8p7x\") pod \"mariadb-client-6-default\" (UID: \"33ce6327-f2cb-40a4-b0a3-c2e0b22c1995\") " pod="openstack/mariadb-client-6-default" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.413788 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8p7x\" (UniqueName: \"kubernetes.io/projected/33ce6327-f2cb-40a4-b0a3-c2e0b22c1995-kube-api-access-d8p7x\") pod \"mariadb-client-6-default\" (UID: \"33ce6327-f2cb-40a4-b0a3-c2e0b22c1995\") " pod="openstack/mariadb-client-6-default" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.436819 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8p7x\" (UniqueName: \"kubernetes.io/projected/33ce6327-f2cb-40a4-b0a3-c2e0b22c1995-kube-api-access-d8p7x\") pod \"mariadb-client-6-default\" (UID: \"33ce6327-f2cb-40a4-b0a3-c2e0b22c1995\") " pod="openstack/mariadb-client-6-default" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.518975 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.685644 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2f6b7b95470562e21fff792b7e02729b394fedbdf427f1b097ebb69b10e4c38" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.685718 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 26 08:18:27 crc kubenswrapper[4909]: I1126 08:18:27.868368 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 26 08:18:27 crc kubenswrapper[4909]: W1126 08:18:27.872560 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33ce6327_f2cb_40a4_b0a3_c2e0b22c1995.slice/crio-32762b45133d41c04e94479f642e7e99a8089eff7970244c9a7147163cb909c9 WatchSource:0}: Error finding container 32762b45133d41c04e94479f642e7e99a8089eff7970244c9a7147163cb909c9: Status 404 returned error can't find the container with id 32762b45133d41c04e94479f642e7e99a8089eff7970244c9a7147163cb909c9 Nov 26 08:18:28 crc kubenswrapper[4909]: I1126 08:18:28.514332 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee" path="/var/lib/kubelet/pods/0f1ef8c3-05e6-4b9d-bf4c-1fc6f87755ee/volumes" Nov 26 08:18:28 crc kubenswrapper[4909]: E1126 08:18:28.571845 4909 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33ce6327_f2cb_40a4_b0a3_c2e0b22c1995.slice/crio-conmon-c069615bfb8296f0ad7875dfc6b81b4105d9e9e339c61cfc93a928c9a55fe023.scope\": RecentStats: unable to find data in memory cache]" Nov 26 08:18:28 crc kubenswrapper[4909]: I1126 08:18:28.694196 4909 generic.go:334] "Generic (PLEG): container finished" podID="33ce6327-f2cb-40a4-b0a3-c2e0b22c1995" containerID="c069615bfb8296f0ad7875dfc6b81b4105d9e9e339c61cfc93a928c9a55fe023" exitCode=1 Nov 26 08:18:28 crc kubenswrapper[4909]: I1126 08:18:28.694245 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"33ce6327-f2cb-40a4-b0a3-c2e0b22c1995","Type":"ContainerDied","Data":"c069615bfb8296f0ad7875dfc6b81b4105d9e9e339c61cfc93a928c9a55fe023"} Nov 26 08:18:28 crc kubenswrapper[4909]: I1126 08:18:28.694275 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"33ce6327-f2cb-40a4-b0a3-c2e0b22c1995","Type":"ContainerStarted","Data":"32762b45133d41c04e94479f642e7e99a8089eff7970244c9a7147163cb909c9"} Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.135002 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.154225 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-6-default_33ce6327-f2cb-40a4-b0a3-c2e0b22c1995/mariadb-client-6-default/0.log" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.179787 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.185868 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.258062 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8p7x\" (UniqueName: \"kubernetes.io/projected/33ce6327-f2cb-40a4-b0a3-c2e0b22c1995-kube-api-access-d8p7x\") pod \"33ce6327-f2cb-40a4-b0a3-c2e0b22c1995\" (UID: \"33ce6327-f2cb-40a4-b0a3-c2e0b22c1995\") " Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.267774 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ce6327-f2cb-40a4-b0a3-c2e0b22c1995-kube-api-access-d8p7x" (OuterVolumeSpecName: "kube-api-access-d8p7x") pod "33ce6327-f2cb-40a4-b0a3-c2e0b22c1995" (UID: "33ce6327-f2cb-40a4-b0a3-c2e0b22c1995"). InnerVolumeSpecName "kube-api-access-d8p7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.318873 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Nov 26 08:18:30 crc kubenswrapper[4909]: E1126 08:18:30.319266 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ce6327-f2cb-40a4-b0a3-c2e0b22c1995" containerName="mariadb-client-6-default" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.319283 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ce6327-f2cb-40a4-b0a3-c2e0b22c1995" containerName="mariadb-client-6-default" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.319451 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ce6327-f2cb-40a4-b0a3-c2e0b22c1995" containerName="mariadb-client-6-default" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.320085 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.325375 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.359852 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc8pv\" (UniqueName: \"kubernetes.io/projected/e8998734-258a-44b8-90b8-ccda65b70dde-kube-api-access-nc8pv\") pod \"mariadb-client-7-default\" (UID: \"e8998734-258a-44b8-90b8-ccda65b70dde\") " pod="openstack/mariadb-client-7-default" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.359921 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8p7x\" (UniqueName: \"kubernetes.io/projected/33ce6327-f2cb-40a4-b0a3-c2e0b22c1995-kube-api-access-d8p7x\") on node \"crc\" DevicePath \"\"" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.460570 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc8pv\" (UniqueName: \"kubernetes.io/projected/e8998734-258a-44b8-90b8-ccda65b70dde-kube-api-access-nc8pv\") pod \"mariadb-client-7-default\" (UID: \"e8998734-258a-44b8-90b8-ccda65b70dde\") " pod="openstack/mariadb-client-7-default" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.477082 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc8pv\" (UniqueName: \"kubernetes.io/projected/e8998734-258a-44b8-90b8-ccda65b70dde-kube-api-access-nc8pv\") pod \"mariadb-client-7-default\" (UID: \"e8998734-258a-44b8-90b8-ccda65b70dde\") " pod="openstack/mariadb-client-7-default" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.514246 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33ce6327-f2cb-40a4-b0a3-c2e0b22c1995" path="/var/lib/kubelet/pods/33ce6327-f2cb-40a4-b0a3-c2e0b22c1995/volumes" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.640673 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.749825 4909 scope.go:117] "RemoveContainer" containerID="c069615bfb8296f0ad7875dfc6b81b4105d9e9e339c61cfc93a928c9a55fe023" Nov 26 08:18:30 crc kubenswrapper[4909]: I1126 08:18:30.749975 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 26 08:18:31 crc kubenswrapper[4909]: I1126 08:18:31.174067 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 26 08:18:31 crc kubenswrapper[4909]: I1126 08:18:31.758799 4909 generic.go:334] "Generic (PLEG): container finished" podID="e8998734-258a-44b8-90b8-ccda65b70dde" containerID="a00cc0a4ab129dbbfeaff9c9bc05ad28e52def8bb85dd77423f528a6d00b7e52" exitCode=0 Nov 26 08:18:31 crc kubenswrapper[4909]: I1126 08:18:31.758895 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"e8998734-258a-44b8-90b8-ccda65b70dde","Type":"ContainerDied","Data":"a00cc0a4ab129dbbfeaff9c9bc05ad28e52def8bb85dd77423f528a6d00b7e52"} Nov 26 08:18:31 crc kubenswrapper[4909]: I1126 08:18:31.758949 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"e8998734-258a-44b8-90b8-ccda65b70dde","Type":"ContainerStarted","Data":"7b3d241108dd224ce051c40a22f9faddb87fa0503d5314c599148640a873d1e3"} Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.121470 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.137253 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_e8998734-258a-44b8-90b8-ccda65b70dde/mariadb-client-7-default/0.log" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.161465 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.169870 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.277637 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Nov 26 08:18:33 crc kubenswrapper[4909]: E1126 08:18:33.279417 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8998734-258a-44b8-90b8-ccda65b70dde" containerName="mariadb-client-7-default" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.279448 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8998734-258a-44b8-90b8-ccda65b70dde" containerName="mariadb-client-7-default" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.279713 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8998734-258a-44b8-90b8-ccda65b70dde" containerName="mariadb-client-7-default" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.324698 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc8pv\" (UniqueName: \"kubernetes.io/projected/e8998734-258a-44b8-90b8-ccda65b70dde-kube-api-access-nc8pv\") pod \"e8998734-258a-44b8-90b8-ccda65b70dde\" (UID: \"e8998734-258a-44b8-90b8-ccda65b70dde\") " Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.324801 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.324969 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.331759 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8998734-258a-44b8-90b8-ccda65b70dde-kube-api-access-nc8pv" (OuterVolumeSpecName: "kube-api-access-nc8pv") pod "e8998734-258a-44b8-90b8-ccda65b70dde" (UID: "e8998734-258a-44b8-90b8-ccda65b70dde"). InnerVolumeSpecName "kube-api-access-nc8pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.425971 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x85k\" (UniqueName: \"kubernetes.io/projected/23210729-1264-446b-a1bd-1da1bd7f4947-kube-api-access-6x85k\") pod \"mariadb-client-2\" (UID: \"23210729-1264-446b-a1bd-1da1bd7f4947\") " pod="openstack/mariadb-client-2" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.426078 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc8pv\" (UniqueName: \"kubernetes.io/projected/e8998734-258a-44b8-90b8-ccda65b70dde-kube-api-access-nc8pv\") on node \"crc\" DevicePath \"\"" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.527526 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x85k\" (UniqueName: \"kubernetes.io/projected/23210729-1264-446b-a1bd-1da1bd7f4947-kube-api-access-6x85k\") pod \"mariadb-client-2\" (UID: \"23210729-1264-446b-a1bd-1da1bd7f4947\") " pod="openstack/mariadb-client-2" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.545210 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x85k\" (UniqueName: \"kubernetes.io/projected/23210729-1264-446b-a1bd-1da1bd7f4947-kube-api-access-6x85k\") pod \"mariadb-client-2\" (UID: \"23210729-1264-446b-a1bd-1da1bd7f4947\") " pod="openstack/mariadb-client-2" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.667220 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.786877 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b3d241108dd224ce051c40a22f9faddb87fa0503d5314c599148640a873d1e3" Nov 26 08:18:33 crc kubenswrapper[4909]: I1126 08:18:33.786940 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 26 08:18:34 crc kubenswrapper[4909]: I1126 08:18:34.200533 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 26 08:18:34 crc kubenswrapper[4909]: W1126 08:18:34.208969 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23210729_1264_446b_a1bd_1da1bd7f4947.slice/crio-8527a8c619256444792971f403f43ade09eb1e4f0c1a56f7dd8e919efea5fb28 WatchSource:0}: Error finding container 8527a8c619256444792971f403f43ade09eb1e4f0c1a56f7dd8e919efea5fb28: Status 404 returned error can't find the container with id 8527a8c619256444792971f403f43ade09eb1e4f0c1a56f7dd8e919efea5fb28 Nov 26 08:18:34 crc kubenswrapper[4909]: I1126 08:18:34.509272 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8998734-258a-44b8-90b8-ccda65b70dde" path="/var/lib/kubelet/pods/e8998734-258a-44b8-90b8-ccda65b70dde/volumes" Nov 26 08:18:34 crc kubenswrapper[4909]: I1126 08:18:34.801576 4909 generic.go:334] "Generic (PLEG): container finished" podID="23210729-1264-446b-a1bd-1da1bd7f4947" containerID="1b1a7f9e7cdcc8673009aa38a34b564654f76df5eff64f2dafa66e03009220e5" exitCode=0 Nov 26 08:18:34 crc kubenswrapper[4909]: I1126 08:18:34.801741 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"23210729-1264-446b-a1bd-1da1bd7f4947","Type":"ContainerDied","Data":"1b1a7f9e7cdcc8673009aa38a34b564654f76df5eff64f2dafa66e03009220e5"} Nov 26 08:18:34 crc kubenswrapper[4909]: I1126 08:18:34.801790 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"23210729-1264-446b-a1bd-1da1bd7f4947","Type":"ContainerStarted","Data":"8527a8c619256444792971f403f43ade09eb1e4f0c1a56f7dd8e919efea5fb28"} Nov 26 08:18:36 crc kubenswrapper[4909]: I1126 08:18:36.234906 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 26 08:18:36 crc kubenswrapper[4909]: I1126 08:18:36.253392 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_23210729-1264-446b-a1bd-1da1bd7f4947/mariadb-client-2/0.log" Nov 26 08:18:36 crc kubenswrapper[4909]: I1126 08:18:36.281328 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Nov 26 08:18:36 crc kubenswrapper[4909]: I1126 08:18:36.288443 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Nov 26 08:18:36 crc kubenswrapper[4909]: I1126 08:18:36.412992 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x85k\" (UniqueName: \"kubernetes.io/projected/23210729-1264-446b-a1bd-1da1bd7f4947-kube-api-access-6x85k\") pod \"23210729-1264-446b-a1bd-1da1bd7f4947\" (UID: \"23210729-1264-446b-a1bd-1da1bd7f4947\") " Nov 26 08:18:36 crc kubenswrapper[4909]: I1126 08:18:36.421090 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23210729-1264-446b-a1bd-1da1bd7f4947-kube-api-access-6x85k" (OuterVolumeSpecName: "kube-api-access-6x85k") pod "23210729-1264-446b-a1bd-1da1bd7f4947" (UID: "23210729-1264-446b-a1bd-1da1bd7f4947"). InnerVolumeSpecName "kube-api-access-6x85k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:18:36 crc kubenswrapper[4909]: I1126 08:18:36.509372 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23210729-1264-446b-a1bd-1da1bd7f4947" path="/var/lib/kubelet/pods/23210729-1264-446b-a1bd-1da1bd7f4947/volumes" Nov 26 08:18:36 crc kubenswrapper[4909]: I1126 08:18:36.516215 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x85k\" (UniqueName: \"kubernetes.io/projected/23210729-1264-446b-a1bd-1da1bd7f4947-kube-api-access-6x85k\") on node \"crc\" DevicePath \"\"" Nov 26 08:18:36 crc kubenswrapper[4909]: I1126 08:18:36.820950 4909 scope.go:117] "RemoveContainer" containerID="1b1a7f9e7cdcc8673009aa38a34b564654f76df5eff64f2dafa66e03009220e5" Nov 26 08:18:36 crc kubenswrapper[4909]: I1126 08:18:36.820999 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 26 08:19:07 crc kubenswrapper[4909]: I1126 08:19:07.300802 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:19:07 crc kubenswrapper[4909]: I1126 08:19:07.301415 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:19:37 crc kubenswrapper[4909]: I1126 08:19:37.300903 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:19:37 crc kubenswrapper[4909]: I1126 08:19:37.301505 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:19:37 crc kubenswrapper[4909]: I1126 08:19:37.361574 4909 scope.go:117] "RemoveContainer" containerID="c44bb7f8fb8138ed205a2a4236b5703cbba0418bf653ca78683b63f6f1ee2578" Nov 26 08:20:07 crc kubenswrapper[4909]: I1126 08:20:07.301293 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:20:07 crc kubenswrapper[4909]: I1126 08:20:07.302177 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:20:07 crc kubenswrapper[4909]: I1126 08:20:07.302242 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 08:20:07 crc kubenswrapper[4909]: I1126 08:20:07.303211 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d02eb9ff541f068efba31560f6d49bd8fa1db4909c006e87655b88c2586e0c0"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 08:20:07 crc kubenswrapper[4909]: I1126 08:20:07.303269 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://1d02eb9ff541f068efba31560f6d49bd8fa1db4909c006e87655b88c2586e0c0" gracePeriod=600 Nov 26 08:20:07 crc kubenswrapper[4909]: I1126 08:20:07.587173 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="1d02eb9ff541f068efba31560f6d49bd8fa1db4909c006e87655b88c2586e0c0" exitCode=0 Nov 26 08:20:07 crc kubenswrapper[4909]: I1126 08:20:07.587226 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"1d02eb9ff541f068efba31560f6d49bd8fa1db4909c006e87655b88c2586e0c0"} Nov 26 08:20:07 crc kubenswrapper[4909]: I1126 08:20:07.587569 4909 scope.go:117] "RemoveContainer" containerID="a1899db6b622d9c6daba3040330871a5846d347bb6adc29a5e189ed190343ab5" Nov 26 08:20:08 crc kubenswrapper[4909]: I1126 08:20:08.598465 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36"} Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.052226 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Nov 26 08:21:59 crc kubenswrapper[4909]: E1126 08:21:59.053089 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23210729-1264-446b-a1bd-1da1bd7f4947" containerName="mariadb-client-2" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.053105 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="23210729-1264-446b-a1bd-1da1bd7f4947" containerName="mariadb-client-2" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.053273 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="23210729-1264-446b-a1bd-1da1bd7f4947" containerName="mariadb-client-2" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.053779 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.057205 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-c9fl9" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.064046 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.174879 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnf44\" (UniqueName: \"kubernetes.io/projected/c15d16fb-aa11-4bdd-b044-c8fd74f693b8-kube-api-access-wnf44\") pod \"mariadb-copy-data\" (UID: \"c15d16fb-aa11-4bdd-b044-c8fd74f693b8\") " pod="openstack/mariadb-copy-data" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.175233 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\") pod \"mariadb-copy-data\" (UID: \"c15d16fb-aa11-4bdd-b044-c8fd74f693b8\") " pod="openstack/mariadb-copy-data" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.276965 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\") pod \"mariadb-copy-data\" (UID: \"c15d16fb-aa11-4bdd-b044-c8fd74f693b8\") " pod="openstack/mariadb-copy-data" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.277068 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnf44\" (UniqueName: \"kubernetes.io/projected/c15d16fb-aa11-4bdd-b044-c8fd74f693b8-kube-api-access-wnf44\") pod \"mariadb-copy-data\" (UID: \"c15d16fb-aa11-4bdd-b044-c8fd74f693b8\") " pod="openstack/mariadb-copy-data" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.279646 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.279676 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\") pod \"mariadb-copy-data\" (UID: \"c15d16fb-aa11-4bdd-b044-c8fd74f693b8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/acf228919b58e98e843ce08599e637146aa5928ec5af0b83adfe3ca04e96f78b/globalmount\"" pod="openstack/mariadb-copy-data" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.301161 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnf44\" (UniqueName: \"kubernetes.io/projected/c15d16fb-aa11-4bdd-b044-c8fd74f693b8-kube-api-access-wnf44\") pod \"mariadb-copy-data\" (UID: \"c15d16fb-aa11-4bdd-b044-c8fd74f693b8\") " pod="openstack/mariadb-copy-data" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.316810 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\") pod \"mariadb-copy-data\" (UID: \"c15d16fb-aa11-4bdd-b044-c8fd74f693b8\") " pod="openstack/mariadb-copy-data" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.380227 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 26 08:21:59 crc kubenswrapper[4909]: I1126 08:21:59.932759 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 26 08:22:00 crc kubenswrapper[4909]: I1126 08:22:00.589086 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"c15d16fb-aa11-4bdd-b044-c8fd74f693b8","Type":"ContainerStarted","Data":"3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032"} Nov 26 08:22:00 crc kubenswrapper[4909]: I1126 08:22:00.589372 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"c15d16fb-aa11-4bdd-b044-c8fd74f693b8","Type":"ContainerStarted","Data":"9ad5d69ded62a2e391d71edccfe17d7bf1f60fefefebe69f582dd176c0bf78b1"} Nov 26 08:22:00 crc kubenswrapper[4909]: I1126 08:22:00.607885 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.607865861 podStartE2EDuration="2.607865861s" podCreationTimestamp="2025-11-26 08:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:22:00.601408396 +0000 UTC m=+4892.747619562" watchObservedRunningTime="2025-11-26 08:22:00.607865861 +0000 UTC m=+4892.754077027" Nov 26 08:22:03 crc kubenswrapper[4909]: I1126 08:22:03.475345 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 26 08:22:03 crc kubenswrapper[4909]: I1126 08:22:03.476854 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 26 08:22:03 crc kubenswrapper[4909]: I1126 08:22:03.484664 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 26 08:22:03 crc kubenswrapper[4909]: I1126 08:22:03.555603 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgqhl\" (UniqueName: \"kubernetes.io/projected/2946f32b-ea52-4018-b3c7-b45df48e3e8e-kube-api-access-hgqhl\") pod \"mariadb-client\" (UID: \"2946f32b-ea52-4018-b3c7-b45df48e3e8e\") " pod="openstack/mariadb-client" Nov 26 08:22:03 crc kubenswrapper[4909]: I1126 08:22:03.657554 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgqhl\" (UniqueName: \"kubernetes.io/projected/2946f32b-ea52-4018-b3c7-b45df48e3e8e-kube-api-access-hgqhl\") pod \"mariadb-client\" (UID: \"2946f32b-ea52-4018-b3c7-b45df48e3e8e\") " pod="openstack/mariadb-client" Nov 26 08:22:03 crc kubenswrapper[4909]: I1126 08:22:03.681174 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgqhl\" (UniqueName: \"kubernetes.io/projected/2946f32b-ea52-4018-b3c7-b45df48e3e8e-kube-api-access-hgqhl\") pod \"mariadb-client\" (UID: \"2946f32b-ea52-4018-b3c7-b45df48e3e8e\") " pod="openstack/mariadb-client" Nov 26 08:22:03 crc kubenswrapper[4909]: I1126 08:22:03.796981 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 26 08:22:04 crc kubenswrapper[4909]: I1126 08:22:04.227312 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 26 08:22:04 crc kubenswrapper[4909]: I1126 08:22:04.623389 4909 generic.go:334] "Generic (PLEG): container finished" podID="2946f32b-ea52-4018-b3c7-b45df48e3e8e" containerID="5cd3c250af23ef04a9f6714bca0a11e4d3494d05faa9239c2f8a0671bf68cb4c" exitCode=0 Nov 26 08:22:04 crc kubenswrapper[4909]: I1126 08:22:04.623436 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"2946f32b-ea52-4018-b3c7-b45df48e3e8e","Type":"ContainerDied","Data":"5cd3c250af23ef04a9f6714bca0a11e4d3494d05faa9239c2f8a0671bf68cb4c"} Nov 26 08:22:04 crc kubenswrapper[4909]: I1126 08:22:04.623476 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"2946f32b-ea52-4018-b3c7-b45df48e3e8e","Type":"ContainerStarted","Data":"49f6e5f18da3169efccc81ca8267f3b43d0b5bbafc56cc3db38e68747bd3460c"} Nov 26 08:22:05 crc kubenswrapper[4909]: I1126 08:22:05.901386 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 26 08:22:05 crc kubenswrapper[4909]: I1126 08:22:05.925473 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_2946f32b-ea52-4018-b3c7-b45df48e3e8e/mariadb-client/0.log" Nov 26 08:22:05 crc kubenswrapper[4909]: I1126 08:22:05.957649 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 26 08:22:05 crc kubenswrapper[4909]: I1126 08:22:05.965654 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 26 08:22:05 crc kubenswrapper[4909]: I1126 08:22:05.998349 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgqhl\" (UniqueName: \"kubernetes.io/projected/2946f32b-ea52-4018-b3c7-b45df48e3e8e-kube-api-access-hgqhl\") pod \"2946f32b-ea52-4018-b3c7-b45df48e3e8e\" (UID: \"2946f32b-ea52-4018-b3c7-b45df48e3e8e\") " Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.003687 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2946f32b-ea52-4018-b3c7-b45df48e3e8e-kube-api-access-hgqhl" (OuterVolumeSpecName: "kube-api-access-hgqhl") pod "2946f32b-ea52-4018-b3c7-b45df48e3e8e" (UID: "2946f32b-ea52-4018-b3c7-b45df48e3e8e"). InnerVolumeSpecName "kube-api-access-hgqhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.085989 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 26 08:22:06 crc kubenswrapper[4909]: E1126 08:22:06.086505 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2946f32b-ea52-4018-b3c7-b45df48e3e8e" containerName="mariadb-client" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.086522 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2946f32b-ea52-4018-b3c7-b45df48e3e8e" containerName="mariadb-client" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.086791 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2946f32b-ea52-4018-b3c7-b45df48e3e8e" containerName="mariadb-client" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.087412 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.092832 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.101425 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgqhl\" (UniqueName: \"kubernetes.io/projected/2946f32b-ea52-4018-b3c7-b45df48e3e8e-kube-api-access-hgqhl\") on node \"crc\" DevicePath \"\"" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.203263 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8vtd\" (UniqueName: \"kubernetes.io/projected/44342625-b514-4cb5-96cf-f59ccab74879-kube-api-access-z8vtd\") pod \"mariadb-client\" (UID: \"44342625-b514-4cb5-96cf-f59ccab74879\") " pod="openstack/mariadb-client" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.304788 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8vtd\" (UniqueName: \"kubernetes.io/projected/44342625-b514-4cb5-96cf-f59ccab74879-kube-api-access-z8vtd\") pod \"mariadb-client\" (UID: \"44342625-b514-4cb5-96cf-f59ccab74879\") " pod="openstack/mariadb-client" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.322816 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8vtd\" (UniqueName: \"kubernetes.io/projected/44342625-b514-4cb5-96cf-f59ccab74879-kube-api-access-z8vtd\") pod \"mariadb-client\" (UID: \"44342625-b514-4cb5-96cf-f59ccab74879\") " pod="openstack/mariadb-client" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.412244 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.521438 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2946f32b-ea52-4018-b3c7-b45df48e3e8e" path="/var/lib/kubelet/pods/2946f32b-ea52-4018-b3c7-b45df48e3e8e/volumes" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.649657 4909 scope.go:117] "RemoveContainer" containerID="5cd3c250af23ef04a9f6714bca0a11e4d3494d05faa9239c2f8a0671bf68cb4c" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.649683 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 26 08:22:06 crc kubenswrapper[4909]: I1126 08:22:06.843477 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 26 08:22:07 crc kubenswrapper[4909]: I1126 08:22:07.301505 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:22:07 crc kubenswrapper[4909]: I1126 08:22:07.301793 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:22:07 crc kubenswrapper[4909]: I1126 08:22:07.662341 4909 generic.go:334] "Generic (PLEG): container finished" podID="44342625-b514-4cb5-96cf-f59ccab74879" containerID="afaed7b92dbe727c2bb753f9d5e6c331b3f8acd71e706d4bfd4dedcb2efec108" exitCode=0 Nov 26 08:22:07 crc kubenswrapper[4909]: I1126 08:22:07.662415 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"44342625-b514-4cb5-96cf-f59ccab74879","Type":"ContainerDied","Data":"afaed7b92dbe727c2bb753f9d5e6c331b3f8acd71e706d4bfd4dedcb2efec108"} Nov 26 08:22:07 crc kubenswrapper[4909]: I1126 08:22:07.662477 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"44342625-b514-4cb5-96cf-f59ccab74879","Type":"ContainerStarted","Data":"65dce7cc628b317f06d31bdc6355677cf43701b61641ea31eaeef1607f615659"} Nov 26 08:22:08 crc kubenswrapper[4909]: I1126 08:22:08.979710 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 26 08:22:08 crc kubenswrapper[4909]: I1126 08:22:08.999603 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_44342625-b514-4cb5-96cf-f59ccab74879/mariadb-client/0.log" Nov 26 08:22:09 crc kubenswrapper[4909]: I1126 08:22:09.025247 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 26 08:22:09 crc kubenswrapper[4909]: I1126 08:22:09.039826 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 26 08:22:09 crc kubenswrapper[4909]: I1126 08:22:09.052023 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8vtd\" (UniqueName: \"kubernetes.io/projected/44342625-b514-4cb5-96cf-f59ccab74879-kube-api-access-z8vtd\") pod \"44342625-b514-4cb5-96cf-f59ccab74879\" (UID: \"44342625-b514-4cb5-96cf-f59ccab74879\") " Nov 26 08:22:09 crc kubenswrapper[4909]: I1126 08:22:09.057809 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44342625-b514-4cb5-96cf-f59ccab74879-kube-api-access-z8vtd" (OuterVolumeSpecName: "kube-api-access-z8vtd") pod "44342625-b514-4cb5-96cf-f59ccab74879" (UID: "44342625-b514-4cb5-96cf-f59ccab74879"). InnerVolumeSpecName "kube-api-access-z8vtd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:22:09 crc kubenswrapper[4909]: I1126 08:22:09.154188 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8vtd\" (UniqueName: \"kubernetes.io/projected/44342625-b514-4cb5-96cf-f59ccab74879-kube-api-access-z8vtd\") on node \"crc\" DevicePath \"\"" Nov 26 08:22:09 crc kubenswrapper[4909]: I1126 08:22:09.684543 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65dce7cc628b317f06d31bdc6355677cf43701b61641ea31eaeef1607f615659" Nov 26 08:22:09 crc kubenswrapper[4909]: I1126 08:22:09.684693 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 26 08:22:10 crc kubenswrapper[4909]: I1126 08:22:10.508829 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44342625-b514-4cb5-96cf-f59ccab74879" path="/var/lib/kubelet/pods/44342625-b514-4cb5-96cf-f59ccab74879/volumes" Nov 26 08:22:37 crc kubenswrapper[4909]: I1126 08:22:37.301564 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:22:37 crc kubenswrapper[4909]: I1126 08:22:37.302143 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:22:40 crc kubenswrapper[4909]: I1126 08:22:40.976353 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 26 08:22:40 crc kubenswrapper[4909]: E1126 08:22:40.977337 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44342625-b514-4cb5-96cf-f59ccab74879" containerName="mariadb-client" Nov 26 08:22:40 crc kubenswrapper[4909]: I1126 08:22:40.977355 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="44342625-b514-4cb5-96cf-f59ccab74879" containerName="mariadb-client" Nov 26 08:22:40 crc kubenswrapper[4909]: I1126 08:22:40.977537 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="44342625-b514-4cb5-96cf-f59ccab74879" containerName="mariadb-client" Nov 26 08:22:40 crc kubenswrapper[4909]: I1126 08:22:40.978446 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:40 crc kubenswrapper[4909]: I1126 08:22:40.981455 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 26 08:22:40 crc kubenswrapper[4909]: I1126 08:22:40.982052 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 26 08:22:40 crc kubenswrapper[4909]: I1126 08:22:40.982284 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-f48gq" Nov 26 08:22:40 crc kubenswrapper[4909]: I1126 08:22:40.992957 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.010906 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.016250 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.024712 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.026106 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.037298 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.045851 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6b24639-0e06-417d-af87-ebf5829602d1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.045895 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77e84efb-e9f8-4b71-9564-50e78af9db0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77e84efb-e9f8-4b71-9564-50e78af9db0e\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.045922 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b24639-0e06-417d-af87-ebf5829602d1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.046152 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b24639-0e06-417d-af87-ebf5829602d1-config\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.046225 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lplx\" (UniqueName: \"kubernetes.io/projected/e6b24639-0e06-417d-af87-ebf5829602d1-kube-api-access-7lplx\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.046283 4909 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6b24639-0e06-417d-af87-ebf5829602d1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.052394 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.148251 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8873dff6-99b5-4363-89bd-26a68d88372c-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.148506 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8873dff6-99b5-4363-89bd-26a68d88372c-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.148647 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6b24639-0e06-417d-af87-ebf5829602d1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.148762 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77e84efb-e9f8-4b71-9564-50e78af9db0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77e84efb-e9f8-4b71-9564-50e78af9db0e\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.148836 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de361675-9fe3-4b71-99e2-13b199c00514-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.148891 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b24639-0e06-417d-af87-ebf5829602d1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.148973 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de361675-9fe3-4b71-99e2-13b199c00514-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.149079 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e430cfc-0b12-4f7c-9ada-79801618cd69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e430cfc-0b12-4f7c-9ada-79801618cd69\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc 
kubenswrapper[4909]: I1126 08:22:41.149162 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mvf6\" (UniqueName: \"kubernetes.io/projected/de361675-9fe3-4b71-99e2-13b199c00514-kube-api-access-6mvf6\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2"
Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.149217 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de361675-9fe3-4b71-99e2-13b199c00514-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2"
Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.149284 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f100f503-2781-4bd9-8604-097466f19a87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f100f503-2781-4bd9-8604-097466f19a87\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2"
Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.149332 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b24639-0e06-417d-af87-ebf5829602d1-config\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0"
Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.149385 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8873dff6-99b5-4363-89bd-26a68d88372c-config\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1"
Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.149436 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de361675-9fe3-4b71-99e2-13b199c00514-config\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2"
Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.149498 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lplx\" (UniqueName: \"kubernetes.io/projected/e6b24639-0e06-417d-af87-ebf5829602d1-kube-api-access-7lplx\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0"
Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.149548 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8873dff6-99b5-4363-89bd-26a68d88372c-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1"
Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.149631 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqm7q\" (UniqueName: \"kubernetes.io/projected/8873dff6-99b5-4363-89bd-26a68d88372c-kube-api-access-xqm7q\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1"
Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.149800 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6b24639-0e06-417d-af87-ebf5829602d1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0"
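The VerifyControllerAttachedVolume/MountVolume records above come from the kubelet volume reconciler, which repeatedly compares a desired state of world (the volumes scheduled pods need) against an actual state of world (what is attached and mounted) and issues operations to converge them. The toy model below illustrates that loop only; every type and name in it is invented for illustration and is not the kubelet's real data structure:

```go
// reconcile.go - a toy model of the desired/actual-state reconciliation
// behind the reconciler_common.go lines that dominate this section.
package main

import "fmt"

type volume struct{ name, pod string }

// reconcile mounts anything desired-but-absent and unmounts anything
// present-but-undesired, mirroring the MountVolume/UnmountVolume pairs
// in the log.
func reconcile(desired, actual map[string]volume) {
	for key, v := range desired {
		if _, ok := actual[key]; !ok {
			fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", v.name, v.pod)
			actual[key] = v // assume the mount succeeds
		}
	}
	for key, v := range actual {
		if _, ok := desired[key]; !ok {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
			delete(actual, key)
		}
	}
}

func main() {
	desired := map[string]volume{
		"scripts": {"scripts", "ovsdbserver-nb-0"},
		"config":  {"config", "ovsdbserver-nb-0"},
	}
	actual := map[string]volume{}
	reconcile(desired, actual) // first pass mounts both volumes
	reconcile(desired, actual) // second pass is a no-op: states converged
}
```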
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6b24639-0e06-417d-af87-ebf5829602d1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.150214 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6b24639-0e06-417d-af87-ebf5829602d1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.150245 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6b24639-0e06-417d-af87-ebf5829602d1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.150546 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b24639-0e06-417d-af87-ebf5829602d1-config\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.151213 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.151251 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77e84efb-e9f8-4b71-9564-50e78af9db0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77e84efb-e9f8-4b71-9564-50e78af9db0e\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ca2a4b6e19f5d4f1029226f86b7a29f2dc5fb3bafca31a38ee72501b5151a5f1/globalmount\"" pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.157880 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b24639-0e06-417d-af87-ebf5829602d1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.167907 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lplx\" (UniqueName: \"kubernetes.io/projected/e6b24639-0e06-417d-af87-ebf5829602d1-kube-api-access-7lplx\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.198091 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77e84efb-e9f8-4b71-9564-50e78af9db0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77e84efb-e9f8-4b71-9564-50e78af9db0e\") pod \"ovsdbserver-nb-0\" (UID: \"e6b24639-0e06-417d-af87-ebf5829602d1\") " pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.198281 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.199765 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.204042 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.204254 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-srfst" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.204397 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.228986 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.251876 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de361675-9fe3-4b71-99e2-13b199c00514-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.251932 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ff76a4-16e1-4823-b620-72dea8981fa1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.251965 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e430cfc-0b12-4f7c-9ada-79801618cd69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e430cfc-0b12-4f7c-9ada-79801618cd69\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.251994 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ff76a4-16e1-4823-b620-72dea8981fa1-config\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252029 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mvf6\" (UniqueName: \"kubernetes.io/projected/de361675-9fe3-4b71-99e2-13b199c00514-kube-api-access-6mvf6\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252056 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de361675-9fe3-4b71-99e2-13b199c00514-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252086 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f100f503-2781-4bd9-8604-097466f19a87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f100f503-2781-4bd9-8604-097466f19a87\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252113 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8873dff6-99b5-4363-89bd-26a68d88372c-config\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252135 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de361675-9fe3-4b71-99e2-13b199c00514-config\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252159 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kntj\" (UniqueName: \"kubernetes.io/projected/52ff76a4-16e1-4823-b620-72dea8981fa1-kube-api-access-2kntj\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252181 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8873dff6-99b5-4363-89bd-26a68d88372c-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252207 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqm7q\" (UniqueName: \"kubernetes.io/projected/8873dff6-99b5-4363-89bd-26a68d88372c-kube-api-access-xqm7q\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252240 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ff76a4-16e1-4823-b620-72dea8981fa1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252276 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8873dff6-99b5-4363-89bd-26a68d88372c-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252295 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52ff76a4-16e1-4823-b620-72dea8981fa1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252330 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3e271ba8-6ef7-41e3-85e5-cea2466150b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e271ba8-6ef7-41e3-85e5-cea2466150b6\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252369 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8873dff6-99b5-4363-89bd-26a68d88372c-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: 
\"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.252409 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de361675-9fe3-4b71-99e2-13b199c00514-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.253679 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de361675-9fe3-4b71-99e2-13b199c00514-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.253960 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de361675-9fe3-4b71-99e2-13b199c00514-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.255770 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8873dff6-99b5-4363-89bd-26a68d88372c-config\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.256092 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8873dff6-99b5-4363-89bd-26a68d88372c-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.256317 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de361675-9fe3-4b71-99e2-13b199c00514-config\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.256649 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8873dff6-99b5-4363-89bd-26a68d88372c-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.258239 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.259716 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.260557 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8873dff6-99b5-4363-89bd-26a68d88372c-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.260694 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de361675-9fe3-4b71-99e2-13b199c00514-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.265945 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.265972 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f100f503-2781-4bd9-8604-097466f19a87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f100f503-2781-4bd9-8604-097466f19a87\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ce321c2757d6805162ac77e88eca98ffc2869dc68524c5d02ea361ce81f638c5/globalmount\"" pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.267837 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.267881 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e430cfc-0b12-4f7c-9ada-79801618cd69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e430cfc-0b12-4f7c-9ada-79801618cd69\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1a4d8c0d2793581217517ac1bf40b4d72834a6293d599fa928a5bd173fc8d95/globalmount\"" pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.275879 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.277738 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.284871 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mvf6\" (UniqueName: \"kubernetes.io/projected/de361675-9fe3-4b71-99e2-13b199c00514-kube-api-access-6mvf6\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.285835 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqm7q\" (UniqueName: \"kubernetes.io/projected/8873dff6-99b5-4363-89bd-26a68d88372c-kube-api-access-xqm7q\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.304630 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.306452 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.315220 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.318562 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f100f503-2781-4bd9-8604-097466f19a87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f100f503-2781-4bd9-8604-097466f19a87\") pod \"ovsdbserver-nb-2\" (UID: \"de361675-9fe3-4b71-99e2-13b199c00514\") " pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.319558 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e430cfc-0b12-4f7c-9ada-79801618cd69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e430cfc-0b12-4f7c-9ada-79801618cd69\") pod \"ovsdbserver-nb-1\" (UID: \"8873dff6-99b5-4363-89bd-26a68d88372c\") " pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.335190 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.345485 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353520 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ngp5\" (UniqueName: \"kubernetes.io/projected/d415596f-0580-4a05-8eda-40af3771f654-kube-api-access-9ngp5\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353584 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52ff76a4-16e1-4823-b620-72dea8981fa1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353650 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3e271ba8-6ef7-41e3-85e5-cea2466150b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e271ba8-6ef7-41e3-85e5-cea2466150b6\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353690 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d415596f-0580-4a05-8eda-40af3771f654-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353720 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-754ce6d4-4728-4aa7-be16-234a798b9c4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-754ce6d4-4728-4aa7-be16-234a798b9c4c\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353746 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd987951-4bf9-486d-8188-d86e6812e03b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd987951-4bf9-486d-8188-d86e6812e03b\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353797 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d415596f-0580-4a05-8eda-40af3771f654-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353824 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ff76a4-16e1-4823-b620-72dea8981fa1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353854 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/502be803-759b-4d2a-93cc-10493cf5e482-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " 
pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353877 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ff76a4-16e1-4823-b620-72dea8981fa1-config\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353900 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502be803-759b-4d2a-93cc-10493cf5e482-config\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353936 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d415596f-0580-4a05-8eda-40af3771f654-config\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353961 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/502be803-759b-4d2a-93cc-10493cf5e482-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.353992 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kntj\" (UniqueName: \"kubernetes.io/projected/52ff76a4-16e1-4823-b620-72dea8981fa1-kube-api-access-2kntj\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.354017 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/502be803-759b-4d2a-93cc-10493cf5e482-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.354049 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ff76a4-16e1-4823-b620-72dea8981fa1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.354077 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twd2v\" (UniqueName: \"kubernetes.io/projected/502be803-759b-4d2a-93cc-10493cf5e482-kube-api-access-twd2v\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.354105 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d415596f-0580-4a05-8eda-40af3771f654-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.355491 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/52ff76a4-16e1-4823-b620-72dea8981fa1-config\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.356115 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ff76a4-16e1-4823-b620-72dea8981fa1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.357017 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.357169 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3e271ba8-6ef7-41e3-85e5-cea2466150b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e271ba8-6ef7-41e3-85e5-cea2466150b6\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/99ff24c52c8739bc7736b05efbaad001abd9676c514d2eeed02492d2b6d42504/globalmount\"" pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.357707 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52ff76a4-16e1-4823-b620-72dea8981fa1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.359185 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ff76a4-16e1-4823-b620-72dea8981fa1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.371759 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kntj\" (UniqueName: \"kubernetes.io/projected/52ff76a4-16e1-4823-b620-72dea8981fa1-kube-api-access-2kntj\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.410921 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3e271ba8-6ef7-41e3-85e5-cea2466150b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e271ba8-6ef7-41e3-85e5-cea2466150b6\") pod \"ovsdbserver-sb-0\" (UID: \"52ff76a4-16e1-4823-b620-72dea8981fa1\") " pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.454842 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ngp5\" (UniqueName: \"kubernetes.io/projected/d415596f-0580-4a05-8eda-40af3771f654-kube-api-access-9ngp5\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.454916 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d415596f-0580-4a05-8eda-40af3771f654-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc 
kubenswrapper[4909]: I1126 08:22:41.454947 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cd987951-4bf9-486d-8188-d86e6812e03b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd987951-4bf9-486d-8188-d86e6812e03b\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.454969 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-754ce6d4-4728-4aa7-be16-234a798b9c4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-754ce6d4-4728-4aa7-be16-234a798b9c4c\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.455004 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d415596f-0580-4a05-8eda-40af3771f654-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.455030 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/502be803-759b-4d2a-93cc-10493cf5e482-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.455056 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502be803-759b-4d2a-93cc-10493cf5e482-config\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.455131 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d415596f-0580-4a05-8eda-40af3771f654-config\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.455187 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/502be803-759b-4d2a-93cc-10493cf5e482-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.455224 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/502be803-759b-4d2a-93cc-10493cf5e482-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.455287 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twd2v\" (UniqueName: \"kubernetes.io/projected/502be803-759b-4d2a-93cc-10493cf5e482-kube-api-access-twd2v\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.455368 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d415596f-0580-4a05-8eda-40af3771f654-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.456022 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/502be803-759b-4d2a-93cc-10493cf5e482-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.458036 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d415596f-0580-4a05-8eda-40af3771f654-config\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.458057 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d415596f-0580-4a05-8eda-40af3771f654-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.458783 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/502be803-759b-4d2a-93cc-10493cf5e482-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.459246 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502be803-759b-4d2a-93cc-10493cf5e482-config\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.461338 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d415596f-0580-4a05-8eda-40af3771f654-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.461348 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.461402 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cd987951-4bf9-486d-8188-d86e6812e03b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd987951-4bf9-486d-8188-d86e6812e03b\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0a3c75a6f4b06eefd69a274db6223e3a98a2274b14a0b7513028906bf1acaaf3/globalmount\"" pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.461664 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.461690 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-754ce6d4-4728-4aa7-be16-234a798b9c4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-754ce6d4-4728-4aa7-be16-234a798b9c4c\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0111f483bf4bab3ef2c2bb9b9ab56b989adc5c277df4fc4443895b93f06c46d1/globalmount\"" pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.464406 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/502be803-759b-4d2a-93cc-10493cf5e482-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.467778 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d415596f-0580-4a05-8eda-40af3771f654-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.473854 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ngp5\" (UniqueName: \"kubernetes.io/projected/d415596f-0580-4a05-8eda-40af3771f654-kube-api-access-9ngp5\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.479048 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twd2v\" (UniqueName: \"kubernetes.io/projected/502be803-759b-4d2a-93cc-10493cf5e482-kube-api-access-twd2v\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.512057 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-754ce6d4-4728-4aa7-be16-234a798b9c4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-754ce6d4-4728-4aa7-be16-234a798b9c4c\") pod \"ovsdbserver-sb-2\" (UID: \"502be803-759b-4d2a-93cc-10493cf5e482\") " pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.512262 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cd987951-4bf9-486d-8188-d86e6812e03b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd987951-4bf9-486d-8188-d86e6812e03b\") pod \"ovsdbserver-sb-1\" (UID: \"d415596f-0580-4a05-8eda-40af3771f654\") " pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.676786 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.689404 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.698172 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.856686 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.950760 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 26 08:22:41 crc kubenswrapper[4909]: I1126 08:22:41.957669 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6b24639-0e06-417d-af87-ebf5829602d1","Type":"ContainerStarted","Data":"dbef05f0f565340dccd189552e705f5147daee77d889eee472e60edb314acb05"} Nov 26 08:22:41 crc kubenswrapper[4909]: W1126 08:22:41.959569 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8873dff6_99b5_4363_89bd_26a68d88372c.slice/crio-09b26688cb3cc37fddd732e1a902e92dad2cd5caa8349783ae9c85ed63e73c58 WatchSource:0}: Error finding container 09b26688cb3cc37fddd732e1a902e92dad2cd5caa8349783ae9c85ed63e73c58: Status 404 returned error can't find the container with id 09b26688cb3cc37fddd732e1a902e92dad2cd5caa8349783ae9c85ed63e73c58 Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.252418 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.332764 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 26 08:22:42 crc kubenswrapper[4909]: W1126 08:22:42.336053 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod502be803_759b_4d2a_93cc_10493cf5e482.slice/crio-9a8a5a73a3b4a54bcf9fbfebba4d914218b8ba7516e6bccb35a92aa967c0ba37 WatchSource:0}: Error finding container 9a8a5a73a3b4a54bcf9fbfebba4d914218b8ba7516e6bccb35a92aa967c0ba37: Status 404 returned error can't find the container with id 9a8a5a73a3b4a54bcf9fbfebba4d914218b8ba7516e6bccb35a92aa967c0ba37 Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.907499 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 26 08:22:42 crc kubenswrapper[4909]: W1126 08:22:42.909464 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde361675_9fe3_4b71_99e2_13b199c00514.slice/crio-70cd611e8f0999905452d15a18c09302cfb83100b4a6ecc1265b1a69b617793d WatchSource:0}: Error finding container 70cd611e8f0999905452d15a18c09302cfb83100b4a6ecc1265b1a69b617793d: Status 404 returned error can't find the container with id 70cd611e8f0999905452d15a18c09302cfb83100b4a6ecc1265b1a69b617793d Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.965601 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6b24639-0e06-417d-af87-ebf5829602d1","Type":"ContainerStarted","Data":"10b1b06e507ef0387a52a87f8fc3e00d93b0cb569dee816fc72148c32ab6dbf1"} Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.965661 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6b24639-0e06-417d-af87-ebf5829602d1","Type":"ContainerStarted","Data":"69a32db7fa12ac27b36550cce0e73229e44b6e583906b1810305de3a8833e1ae"} Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.966679 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" 
event={"ID":"de361675-9fe3-4b71-99e2-13b199c00514","Type":"ContainerStarted","Data":"70cd611e8f0999905452d15a18c09302cfb83100b4a6ecc1265b1a69b617793d"} Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.970382 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"502be803-759b-4d2a-93cc-10493cf5e482","Type":"ContainerStarted","Data":"1bfac6c0cec22250c7407d66b1b015c8a327a8e1a5675463e298c5ad406c07ed"} Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.970435 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"502be803-759b-4d2a-93cc-10493cf5e482","Type":"ContainerStarted","Data":"a718bb3a5cca55defaeaac801431c680a394a685cfe4470eedef3108b78f6a0c"} Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.970449 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"502be803-759b-4d2a-93cc-10493cf5e482","Type":"ContainerStarted","Data":"9a8a5a73a3b4a54bcf9fbfebba4d914218b8ba7516e6bccb35a92aa967c0ba37"} Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.974305 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"52ff76a4-16e1-4823-b620-72dea8981fa1","Type":"ContainerStarted","Data":"668a0f5ce323517c2ba770aad5bd27267196462fbb4b9167e6a20150bd5c54de"} Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.974343 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"52ff76a4-16e1-4823-b620-72dea8981fa1","Type":"ContainerStarted","Data":"669ad2d642a6768daa033746d737b1cfde3572282b2dbea36ba5de86884b7b78"} Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.974354 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"52ff76a4-16e1-4823-b620-72dea8981fa1","Type":"ContainerStarted","Data":"9a0ab067711490bdbc2aa5c134c72f2b466bd00f2efa41a000af8726e048baff"} Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.976146 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8873dff6-99b5-4363-89bd-26a68d88372c","Type":"ContainerStarted","Data":"63521e2f21a12b01886f30a6f660efd0e8828ec39e5a7f0f34f88e1bb3b647f9"} Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.976186 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8873dff6-99b5-4363-89bd-26a68d88372c","Type":"ContainerStarted","Data":"3986ac088dec21a4481b854bf8570646c35f902d84fd124716a356c35b71ec4d"} Nov 26 08:22:42 crc kubenswrapper[4909]: I1126 08:22:42.976196 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8873dff6-99b5-4363-89bd-26a68d88372c","Type":"ContainerStarted","Data":"09b26688cb3cc37fddd732e1a902e92dad2cd5caa8349783ae9c85ed63e73c58"} Nov 26 08:22:43 crc kubenswrapper[4909]: I1126 08:22:43.000669 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.000649651 podStartE2EDuration="3.000649651s" podCreationTimestamp="2025-11-26 08:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:22:42.997502385 +0000 UTC m=+4935.143713551" watchObservedRunningTime="2025-11-26 08:22:43.000649651 +0000 UTC m=+4935.146860827" Nov 26 08:22:43 crc kubenswrapper[4909]: I1126 08:22:43.007426 4909 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.007405325 podStartE2EDuration="4.007405325s" podCreationTimestamp="2025-11-26 08:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:22:42.983494973 +0000 UTC m=+4935.129706139" watchObservedRunningTime="2025-11-26 08:22:43.007405325 +0000 UTC m=+4935.153616491" Nov 26 08:22:43 crc kubenswrapper[4909]: I1126 08:22:43.017297 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.017278494 podStartE2EDuration="4.017278494s" podCreationTimestamp="2025-11-26 08:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:22:43.015921848 +0000 UTC m=+4935.162133004" watchObservedRunningTime="2025-11-26 08:22:43.017278494 +0000 UTC m=+4935.163489660" Nov 26 08:22:43 crc kubenswrapper[4909]: I1126 08:22:43.035945 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.035926683 podStartE2EDuration="3.035926683s" podCreationTimestamp="2025-11-26 08:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:22:43.032648054 +0000 UTC m=+4935.178859220" watchObservedRunningTime="2025-11-26 08:22:43.035926683 +0000 UTC m=+4935.182137849" Nov 26 08:22:43 crc kubenswrapper[4909]: I1126 08:22:43.113442 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 26 08:22:43 crc kubenswrapper[4909]: W1126 08:22:43.128978 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd415596f_0580_4a05_8eda_40af3771f654.slice/crio-282c99e83d8fc4c16083e8f934fb70ce5feb8813b8c586f4cbcdfed1ba7701bc WatchSource:0}: Error finding container 282c99e83d8fc4c16083e8f934fb70ce5feb8813b8c586f4cbcdfed1ba7701bc: Status 404 returned error can't find the container with id 282c99e83d8fc4c16083e8f934fb70ce5feb8813b8c586f4cbcdfed1ba7701bc Nov 26 08:22:43 crc kubenswrapper[4909]: I1126 08:22:43.987970 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"d415596f-0580-4a05-8eda-40af3771f654","Type":"ContainerStarted","Data":"c5e5873c170a87690091490dd6857ec4a3bbe0bddea991b6ada673dc401b0c64"} Nov 26 08:22:43 crc kubenswrapper[4909]: I1126 08:22:43.988318 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"d415596f-0580-4a05-8eda-40af3771f654","Type":"ContainerStarted","Data":"e344ed5af74f2851a4f1f67cdfd6ff34bbc89e04d486093be98919d3b4226edd"} Nov 26 08:22:43 crc kubenswrapper[4909]: I1126 08:22:43.988332 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"d415596f-0580-4a05-8eda-40af3771f654","Type":"ContainerStarted","Data":"282c99e83d8fc4c16083e8f934fb70ce5feb8813b8c586f4cbcdfed1ba7701bc"} Nov 26 08:22:43 crc kubenswrapper[4909]: I1126 08:22:43.991776 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"de361675-9fe3-4b71-99e2-13b199c00514","Type":"ContainerStarted","Data":"5faec81ea59aaafa2aed8f1d4699cc20e030f57d5459c96969f9253410a3c313"} Nov 26 08:22:43 crc kubenswrapper[4909]: I1126 08:22:43.991847 4909 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"de361675-9fe3-4b71-99e2-13b199c00514","Type":"ContainerStarted","Data":"d15eb25c7918ef9b461ad297d7f6618a79a49fdf762a011c5c1b72a19905947f"} Nov 26 08:22:44 crc kubenswrapper[4909]: I1126 08:22:44.046583 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.046568009 podStartE2EDuration="4.046568009s" podCreationTimestamp="2025-11-26 08:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:22:44.045245962 +0000 UTC m=+4936.191457128" watchObservedRunningTime="2025-11-26 08:22:44.046568009 +0000 UTC m=+4936.192779195" Nov 26 08:22:44 crc kubenswrapper[4909]: I1126 08:22:44.065214 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=5.065199046 podStartE2EDuration="5.065199046s" podCreationTimestamp="2025-11-26 08:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:22:44.061517816 +0000 UTC m=+4936.207728982" watchObservedRunningTime="2025-11-26 08:22:44.065199046 +0000 UTC m=+4936.211410212" Nov 26 08:22:44 crc kubenswrapper[4909]: I1126 08:22:44.307635 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:44 crc kubenswrapper[4909]: I1126 08:22:44.335385 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:44 crc kubenswrapper[4909]: I1126 08:22:44.346072 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:44 crc kubenswrapper[4909]: I1126 08:22:44.677670 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:44 crc kubenswrapper[4909]: I1126 08:22:44.689822 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:44 crc kubenswrapper[4909]: I1126 08:22:44.698937 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:46 crc kubenswrapper[4909]: I1126 08:22:46.307552 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:46 crc kubenswrapper[4909]: I1126 08:22:46.336227 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Nov 26 08:22:46 crc kubenswrapper[4909]: I1126 08:22:46.346130 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:46 crc kubenswrapper[4909]: I1126 08:22:46.677279 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 26 08:22:46 crc kubenswrapper[4909]: I1126 08:22:46.692786 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Nov 26 08:22:46 crc kubenswrapper[4909]: I1126 08:22:46.698423 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.348378 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 26 08:22:47 crc 
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.382180 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.399676 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.417878 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.674018 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-ldctd"]
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.675693 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.679616 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.690105 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-ldctd"]
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.740812 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.746390 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.752289 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.776510 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-dns-svc\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.776572 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4jdw\" (UniqueName: \"kubernetes.io/projected/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-kube-api-access-w4jdw\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.776638 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-ovsdbserver-nb\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.776723 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-config\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.787336 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.788947 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.878145 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-config\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.878261 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-dns-svc\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.878284 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4jdw\" (UniqueName: \"kubernetes.io/projected/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-kube-api-access-w4jdw\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.878310 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-ovsdbserver-nb\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.879418 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-ovsdbserver-nb\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.879789 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-dns-svc\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.879918 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-config\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:47 crc kubenswrapper[4909]: I1126 08:22:47.906918 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4jdw\" (UniqueName: \"kubernetes.io/projected/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-kube-api-access-w4jdw\") pod \"dnsmasq-dns-547968cc8f-ldctd\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.003821 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547968cc8f-ldctd"
Need to start a new one" pod="openstack/dnsmasq-dns-547968cc8f-ldctd" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.097721 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.108363 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.202671 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-ldctd"] Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.233925 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c54468fdc-w74nb"] Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.239226 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.250707 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.252475 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c54468fdc-w74nb"] Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.288942 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.289394 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-config\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.290315 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9wpm\" (UniqueName: \"kubernetes.io/projected/e49fc00a-2715-47f2-858c-606a04d262b8-kube-api-access-k9wpm\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.290459 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-dns-svc\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.290525 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.391553 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.391629 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.391653 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-config\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.391734 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9wpm\" (UniqueName: \"kubernetes.io/projected/e49fc00a-2715-47f2-858c-606a04d262b8-kube-api-access-k9wpm\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.391790 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-dns-svc\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.392644 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.392706 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.393298 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-dns-svc\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.393443 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-config\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.411857 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9wpm\" (UniqueName: \"kubernetes.io/projected/e49fc00a-2715-47f2-858c-606a04d262b8-kube-api-access-k9wpm\") pod \"dnsmasq-dns-7c54468fdc-w74nb\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " 
pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.580433 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:48 crc kubenswrapper[4909]: I1126 08:22:48.640235 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-ldctd"] Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.037363 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c54468fdc-w74nb"] Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.038644 4909 generic.go:334] "Generic (PLEG): container finished" podID="cd58d6e3-6e87-4cb2-bec5-24844b2ad10f" containerID="c107cba0a8c7f139213d48b5327f404b2579f93ae4a8b654b4c3eb96281e3d48" exitCode=0 Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.038724 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547968cc8f-ldctd" event={"ID":"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f","Type":"ContainerDied","Data":"c107cba0a8c7f139213d48b5327f404b2579f93ae4a8b654b4c3eb96281e3d48"} Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.038763 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547968cc8f-ldctd" event={"ID":"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f","Type":"ContainerStarted","Data":"c6a51eb36e9b5e691a9f1b4743e31d1aeb95725c6bb80b1b1383dbb252a18273"} Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.367528 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547968cc8f-ldctd" Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.422617 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-config\") pod \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.422739 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-dns-svc\") pod \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.422909 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-ovsdbserver-nb\") pod \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.422936 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4jdw\" (UniqueName: \"kubernetes.io/projected/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-kube-api-access-w4jdw\") pod \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\" (UID: \"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f\") " Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.427128 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-kube-api-access-w4jdw" (OuterVolumeSpecName: "kube-api-access-w4jdw") pod "cd58d6e3-6e87-4cb2-bec5-24844b2ad10f" (UID: "cd58d6e3-6e87-4cb2-bec5-24844b2ad10f"). InnerVolumeSpecName "kube-api-access-w4jdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.441461 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd58d6e3-6e87-4cb2-bec5-24844b2ad10f" (UID: "cd58d6e3-6e87-4cb2-bec5-24844b2ad10f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.442008 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-config" (OuterVolumeSpecName: "config") pod "cd58d6e3-6e87-4cb2-bec5-24844b2ad10f" (UID: "cd58d6e3-6e87-4cb2-bec5-24844b2ad10f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.447680 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd58d6e3-6e87-4cb2-bec5-24844b2ad10f" (UID: "cd58d6e3-6e87-4cb2-bec5-24844b2ad10f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.525310 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.525346 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.525362 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4jdw\" (UniqueName: \"kubernetes.io/projected/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-kube-api-access-w4jdw\") on node \"crc\" DevicePath \"\"" Nov 26 08:22:49 crc kubenswrapper[4909]: I1126 08:22:49.525376 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:22:50 crc kubenswrapper[4909]: I1126 08:22:50.056195 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547968cc8f-ldctd" event={"ID":"cd58d6e3-6e87-4cb2-bec5-24844b2ad10f","Type":"ContainerDied","Data":"c6a51eb36e9b5e691a9f1b4743e31d1aeb95725c6bb80b1b1383dbb252a18273"} Nov 26 08:22:50 crc kubenswrapper[4909]: I1126 08:22:50.056734 4909 scope.go:117] "RemoveContainer" containerID="c107cba0a8c7f139213d48b5327f404b2579f93ae4a8b654b4c3eb96281e3d48" Nov 26 08:22:50 crc kubenswrapper[4909]: I1126 08:22:50.056489 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-547968cc8f-ldctd" Nov 26 08:22:50 crc kubenswrapper[4909]: I1126 08:22:50.060383 4909 generic.go:334] "Generic (PLEG): container finished" podID="e49fc00a-2715-47f2-858c-606a04d262b8" containerID="9da984132766679b49bddcee6458260bc6ab8ee6a34c04a3987d851b3318508b" exitCode=0 Nov 26 08:22:50 crc kubenswrapper[4909]: I1126 08:22:50.060432 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" event={"ID":"e49fc00a-2715-47f2-858c-606a04d262b8","Type":"ContainerDied","Data":"9da984132766679b49bddcee6458260bc6ab8ee6a34c04a3987d851b3318508b"} Nov 26 08:22:50 crc kubenswrapper[4909]: I1126 08:22:50.060535 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" event={"ID":"e49fc00a-2715-47f2-858c-606a04d262b8","Type":"ContainerStarted","Data":"21cc6349302e37c40f09c5f1cfb07f38a94c61b171df0cfdc8f3850a4e179059"} Nov 26 08:22:50 crc kubenswrapper[4909]: I1126 08:22:50.249037 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-ldctd"] Nov 26 08:22:50 crc kubenswrapper[4909]: I1126 08:22:50.263721 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-ldctd"] Nov 26 08:22:50 crc kubenswrapper[4909]: I1126 08:22:50.508481 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd58d6e3-6e87-4cb2-bec5-24844b2ad10f" path="/var/lib/kubelet/pods/cd58d6e3-6e87-4cb2-bec5-24844b2ad10f/volumes" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.013528 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Nov 26 08:22:51 crc kubenswrapper[4909]: E1126 08:22:51.014307 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd58d6e3-6e87-4cb2-bec5-24844b2ad10f" containerName="init" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.014323 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd58d6e3-6e87-4cb2-bec5-24844b2ad10f" containerName="init" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.014563 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd58d6e3-6e87-4cb2-bec5-24844b2ad10f" containerName="init" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.015583 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.023947 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.031907 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.384047 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c84be150-311b-43e8-972d-6239b995b74c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84be150-311b-43e8-972d-6239b995b74c\") pod \"ovn-copy-data\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " pod="openstack/ovn-copy-data" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.384114 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/ade07b13-f382-46fa-805b-3c6d479a6a13-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " pod="openstack/ovn-copy-data" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.384173 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z59b2\" (UniqueName: \"kubernetes.io/projected/ade07b13-f382-46fa-805b-3c6d479a6a13-kube-api-access-z59b2\") pod \"ovn-copy-data\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " pod="openstack/ovn-copy-data" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.424503 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" event={"ID":"e49fc00a-2715-47f2-858c-606a04d262b8","Type":"ContainerStarted","Data":"148390318a352fbed1919f42225d443d21316df99cd0770883277ad45ff57d53"} Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.426178 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.473491 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" podStartSLOduration=3.473470369 podStartE2EDuration="3.473470369s" podCreationTimestamp="2025-11-26 08:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:22:51.461295196 +0000 UTC m=+4943.607506382" watchObservedRunningTime="2025-11-26 08:22:51.473470369 +0000 UTC m=+4943.619681555" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.486213 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z59b2\" (UniqueName: \"kubernetes.io/projected/ade07b13-f382-46fa-805b-3c6d479a6a13-kube-api-access-z59b2\") pod \"ovn-copy-data\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " pod="openstack/ovn-copy-data" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.486443 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c84be150-311b-43e8-972d-6239b995b74c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84be150-311b-43e8-972d-6239b995b74c\") pod \"ovn-copy-data\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " pod="openstack/ovn-copy-data" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.486500 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" 
(UniqueName: \"kubernetes.io/secret/ade07b13-f382-46fa-805b-3c6d479a6a13-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " pod="openstack/ovn-copy-data" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.493549 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/ade07b13-f382-46fa-805b-3c6d479a6a13-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " pod="openstack/ovn-copy-data" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.497527 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.497568 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c84be150-311b-43e8-972d-6239b995b74c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84be150-311b-43e8-972d-6239b995b74c\") pod \"ovn-copy-data\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1445bbabae7dad0e4dc6c423a6a4272dda7ec4a3c0c72e562cea5e581fe0ae4b/globalmount\"" pod="openstack/ovn-copy-data" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.510429 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z59b2\" (UniqueName: \"kubernetes.io/projected/ade07b13-f382-46fa-805b-3c6d479a6a13-kube-api-access-z59b2\") pod \"ovn-copy-data\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " pod="openstack/ovn-copy-data" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.527230 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c84be150-311b-43e8-972d-6239b995b74c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84be150-311b-43e8-972d-6239b995b74c\") pod \"ovn-copy-data\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " pod="openstack/ovn-copy-data" Nov 26 08:22:51 crc kubenswrapper[4909]: I1126 08:22:51.717992 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 26 08:22:52 crc kubenswrapper[4909]: I1126 08:22:52.115509 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 26 08:22:52 crc kubenswrapper[4909]: W1126 08:22:52.120769 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade07b13_f382_46fa_805b_3c6d479a6a13.slice/crio-e9e5310c8d0f6d67cd7a4adcaa476fda78fe890bc206ba3cbb5efbbac583fcb4 WatchSource:0}: Error finding container e9e5310c8d0f6d67cd7a4adcaa476fda78fe890bc206ba3cbb5efbbac583fcb4: Status 404 returned error can't find the container with id e9e5310c8d0f6d67cd7a4adcaa476fda78fe890bc206ba3cbb5efbbac583fcb4 Nov 26 08:22:52 crc kubenswrapper[4909]: I1126 08:22:52.440096 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"ade07b13-f382-46fa-805b-3c6d479a6a13","Type":"ContainerStarted","Data":"884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5"} Nov 26 08:22:52 crc kubenswrapper[4909]: I1126 08:22:52.440173 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"ade07b13-f382-46fa-805b-3c6d479a6a13","Type":"ContainerStarted","Data":"e9e5310c8d0f6d67cd7a4adcaa476fda78fe890bc206ba3cbb5efbbac583fcb4"} Nov 26 08:22:52 crc kubenswrapper[4909]: I1126 08:22:52.466988 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.466965326 podStartE2EDuration="3.466965326s" podCreationTimestamp="2025-11-26 08:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:22:52.458462124 +0000 UTC m=+4944.604673310" watchObservedRunningTime="2025-11-26 08:22:52.466965326 +0000 UTC m=+4944.613176512" Nov 26 08:22:57 crc kubenswrapper[4909]: I1126 08:22:57.973257 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 26 08:22:57 crc kubenswrapper[4909]: I1126 08:22:57.975118 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 26 08:22:57 crc kubenswrapper[4909]: I1126 08:22:57.979128 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 26 08:22:57 crc kubenswrapper[4909]: I1126 08:22:57.980869 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 26 08:22:57 crc kubenswrapper[4909]: I1126 08:22:57.985839 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qttb8" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.000148 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.095445 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1738be25-4013-47d1-b3c0-28ba45749d59-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.095843 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1738be25-4013-47d1-b3c0-28ba45749d59-scripts\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.096063 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1738be25-4013-47d1-b3c0-28ba45749d59-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.096200 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdrl\" (UniqueName: \"kubernetes.io/projected/1738be25-4013-47d1-b3c0-28ba45749d59-kube-api-access-sxdrl\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.096388 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1738be25-4013-47d1-b3c0-28ba45749d59-config\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.198391 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1738be25-4013-47d1-b3c0-28ba45749d59-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.198497 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1738be25-4013-47d1-b3c0-28ba45749d59-scripts\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.198770 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1738be25-4013-47d1-b3c0-28ba45749d59-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.198802 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxdrl\" (UniqueName: \"kubernetes.io/projected/1738be25-4013-47d1-b3c0-28ba45749d59-kube-api-access-sxdrl\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.198831 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1738be25-4013-47d1-b3c0-28ba45749d59-config\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.199661 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1738be25-4013-47d1-b3c0-28ba45749d59-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.200051 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1738be25-4013-47d1-b3c0-28ba45749d59-config\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.200267 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1738be25-4013-47d1-b3c0-28ba45749d59-scripts\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.205822 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1738be25-4013-47d1-b3c0-28ba45749d59-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.216806 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxdrl\" (UniqueName: \"kubernetes.io/projected/1738be25-4013-47d1-b3c0-28ba45749d59-kube-api-access-sxdrl\") pod \"ovn-northd-0\" (UID: \"1738be25-4013-47d1-b3c0-28ba45749d59\") " pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.305241 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.585176 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.644830 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-fpkg5"] Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.645386 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" podUID="0f8b29f5-e339-4947-a7b5-a68d6dfaced6" containerName="dnsmasq-dns" containerID="cri-o://43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2" gracePeriod=10 Nov 26 08:22:58 crc kubenswrapper[4909]: I1126 08:22:58.833655 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.077394 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:22:59 crc kubenswrapper[4909]: E1126 08:22:59.080811 4909 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.206:57476->38.129.56.206:33469: write tcp 38.129.56.206:57476->38.129.56.206:33469: write: connection reset by peer Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.117182 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lctcb\" (UniqueName: \"kubernetes.io/projected/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-kube-api-access-lctcb\") pod \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.117278 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-dns-svc\") pod \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.117468 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-config\") pod \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\" (UID: \"0f8b29f5-e339-4947-a7b5-a68d6dfaced6\") " Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.122501 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-kube-api-access-lctcb" (OuterVolumeSpecName: "kube-api-access-lctcb") pod "0f8b29f5-e339-4947-a7b5-a68d6dfaced6" (UID: "0f8b29f5-e339-4947-a7b5-a68d6dfaced6"). InnerVolumeSpecName "kube-api-access-lctcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.157507 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-config" (OuterVolumeSpecName: "config") pod "0f8b29f5-e339-4947-a7b5-a68d6dfaced6" (UID: "0f8b29f5-e339-4947-a7b5-a68d6dfaced6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.159265 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f8b29f5-e339-4947-a7b5-a68d6dfaced6" (UID: "0f8b29f5-e339-4947-a7b5-a68d6dfaced6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.219840 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.219874 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lctcb\" (UniqueName: \"kubernetes.io/projected/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-kube-api-access-lctcb\") on node \"crc\" DevicePath \"\"" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.219887 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f8b29f5-e339-4947-a7b5-a68d6dfaced6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.504575 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1738be25-4013-47d1-b3c0-28ba45749d59","Type":"ContainerStarted","Data":"995175efe9bd12570054a50e5ed30994054ec8e9742cf04a30d72c68c74ec9e0"} Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.505247 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1738be25-4013-47d1-b3c0-28ba45749d59","Type":"ContainerStarted","Data":"f72f32e44c54159b0fb31fe193ae1eac4de1245e5bbde7cb28fcb0cceb737096"} Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.505297 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1738be25-4013-47d1-b3c0-28ba45749d59","Type":"ContainerStarted","Data":"acd79baa255f9f5c5e525e0d0e83ad7c6487939acd2b81397331a2a0bbcac191"} Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.505340 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.508719 4909 generic.go:334] "Generic (PLEG): container finished" podID="0f8b29f5-e339-4947-a7b5-a68d6dfaced6" containerID="43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2" exitCode=0 Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.508769 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.508767 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" event={"ID":"0f8b29f5-e339-4947-a7b5-a68d6dfaced6","Type":"ContainerDied","Data":"43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2"} Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.508888 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-fpkg5" event={"ID":"0f8b29f5-e339-4947-a7b5-a68d6dfaced6","Type":"ContainerDied","Data":"f6fb6eb47d2cc943664a86346151f343055d8b5e5f1e928601bf8646d30a924a"} Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.508914 4909 scope.go:117] "RemoveContainer" containerID="43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.539569 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.539545916 podStartE2EDuration="2.539545916s" podCreationTimestamp="2025-11-26 08:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:22:59.526189492 +0000 UTC m=+4951.672400688" watchObservedRunningTime="2025-11-26 08:22:59.539545916 +0000 UTC m=+4951.685757092" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.550876 4909 scope.go:117] "RemoveContainer" containerID="f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.558412 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-fpkg5"] Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.571123 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-fpkg5"] Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.580220 4909 scope.go:117] "RemoveContainer" containerID="43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2" Nov 26 08:22:59 crc kubenswrapper[4909]: E1126 08:22:59.580722 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2\": container with ID starting with 43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2 not found: ID does not exist" containerID="43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.580915 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2"} err="failed to get container status \"43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2\": rpc error: code = NotFound desc = could not find container \"43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2\": container with ID starting with 43b1ac5175ab00044ce299d3fb4fe47a72a20e0ca931cfb74849d5a82bde0ce2 not found: ID does not exist" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.581024 4909 scope.go:117] "RemoveContainer" containerID="f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73" Nov 26 08:22:59 crc kubenswrapper[4909]: E1126 08:22:59.583028 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73\": container with ID starting with f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73 not found: ID does not exist" containerID="f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73" Nov 26 08:22:59 crc kubenswrapper[4909]: I1126 08:22:59.583057 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73"} err="failed to get container status \"f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73\": rpc error: code = NotFound desc = could not find container \"f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73\": container with ID starting with f05233d3cc652e2b69d949af31e8e06962cbf05b84e3ea2dbe497d7189c75d73 not found: ID does not exist" Nov 26 08:23:00 crc kubenswrapper[4909]: I1126 08:23:00.519053 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f8b29f5-e339-4947-a7b5-a68d6dfaced6" path="/var/lib/kubelet/pods/0f8b29f5-e339-4947-a7b5-a68d6dfaced6/volumes" Nov 26 08:23:03 crc kubenswrapper[4909]: I1126 08:23:03.425749 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8vbph"] Nov 26 08:23:03 crc kubenswrapper[4909]: E1126 08:23:03.426276 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8b29f5-e339-4947-a7b5-a68d6dfaced6" containerName="init" Nov 26 08:23:03 crc kubenswrapper[4909]: I1126 08:23:03.426288 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8b29f5-e339-4947-a7b5-a68d6dfaced6" containerName="init" Nov 26 08:23:03 crc kubenswrapper[4909]: E1126 08:23:03.426315 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8b29f5-e339-4947-a7b5-a68d6dfaced6" containerName="dnsmasq-dns" Nov 26 08:23:03 crc kubenswrapper[4909]: I1126 08:23:03.426322 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8b29f5-e339-4947-a7b5-a68d6dfaced6" containerName="dnsmasq-dns" Nov 26 08:23:03 crc kubenswrapper[4909]: I1126 08:23:03.426481 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8b29f5-e339-4947-a7b5-a68d6dfaced6" containerName="dnsmasq-dns" Nov 26 08:23:03 crc kubenswrapper[4909]: I1126 08:23:03.426997 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8vbph" Nov 26 08:23:03 crc kubenswrapper[4909]: I1126 08:23:03.435785 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8vbph"] Nov 26 08:23:03 crc kubenswrapper[4909]: I1126 08:23:03.503773 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkqzp\" (UniqueName: \"kubernetes.io/projected/fd240cfb-fc3a-4822-8155-475932071966-kube-api-access-rkqzp\") pod \"keystone-db-create-8vbph\" (UID: \"fd240cfb-fc3a-4822-8155-475932071966\") " pod="openstack/keystone-db-create-8vbph" Nov 26 08:23:03 crc kubenswrapper[4909]: I1126 08:23:03.605323 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkqzp\" (UniqueName: \"kubernetes.io/projected/fd240cfb-fc3a-4822-8155-475932071966-kube-api-access-rkqzp\") pod \"keystone-db-create-8vbph\" (UID: \"fd240cfb-fc3a-4822-8155-475932071966\") " pod="openstack/keystone-db-create-8vbph" Nov 26 08:23:03 crc kubenswrapper[4909]: I1126 08:23:03.625239 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkqzp\" (UniqueName: \"kubernetes.io/projected/fd240cfb-fc3a-4822-8155-475932071966-kube-api-access-rkqzp\") pod \"keystone-db-create-8vbph\" (UID: \"fd240cfb-fc3a-4822-8155-475932071966\") " pod="openstack/keystone-db-create-8vbph" Nov 26 08:23:03 crc kubenswrapper[4909]: I1126 08:23:03.744374 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8vbph" Nov 26 08:23:04 crc kubenswrapper[4909]: I1126 08:23:04.159505 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8vbph"] Nov 26 08:23:04 crc kubenswrapper[4909]: I1126 08:23:04.548382 4909 generic.go:334] "Generic (PLEG): container finished" podID="fd240cfb-fc3a-4822-8155-475932071966" containerID="4e65a48844d9443182612718e74b6adb9e165066ec8f7ab159ee9c091f238d94" exitCode=0 Nov 26 08:23:04 crc kubenswrapper[4909]: I1126 08:23:04.548445 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8vbph" event={"ID":"fd240cfb-fc3a-4822-8155-475932071966","Type":"ContainerDied","Data":"4e65a48844d9443182612718e74b6adb9e165066ec8f7ab159ee9c091f238d94"} Nov 26 08:23:04 crc kubenswrapper[4909]: I1126 08:23:04.548745 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8vbph" event={"ID":"fd240cfb-fc3a-4822-8155-475932071966","Type":"ContainerStarted","Data":"758873533f6c23b5866894208e9caf7334dd272863f8b51bcf3017fcfba80ffb"} Nov 26 08:23:05 crc kubenswrapper[4909]: I1126 08:23:05.880444 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8vbph" Nov 26 08:23:05 crc kubenswrapper[4909]: I1126 08:23:05.945081 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkqzp\" (UniqueName: \"kubernetes.io/projected/fd240cfb-fc3a-4822-8155-475932071966-kube-api-access-rkqzp\") pod \"fd240cfb-fc3a-4822-8155-475932071966\" (UID: \"fd240cfb-fc3a-4822-8155-475932071966\") " Nov 26 08:23:05 crc kubenswrapper[4909]: I1126 08:23:05.953079 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd240cfb-fc3a-4822-8155-475932071966-kube-api-access-rkqzp" (OuterVolumeSpecName: "kube-api-access-rkqzp") pod "fd240cfb-fc3a-4822-8155-475932071966" (UID: "fd240cfb-fc3a-4822-8155-475932071966"). 
InnerVolumeSpecName "kube-api-access-rkqzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:23:06 crc kubenswrapper[4909]: I1126 08:23:06.046890 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkqzp\" (UniqueName: \"kubernetes.io/projected/fd240cfb-fc3a-4822-8155-475932071966-kube-api-access-rkqzp\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:06 crc kubenswrapper[4909]: I1126 08:23:06.570258 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8vbph" event={"ID":"fd240cfb-fc3a-4822-8155-475932071966","Type":"ContainerDied","Data":"758873533f6c23b5866894208e9caf7334dd272863f8b51bcf3017fcfba80ffb"} Nov 26 08:23:06 crc kubenswrapper[4909]: I1126 08:23:06.570654 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="758873533f6c23b5866894208e9caf7334dd272863f8b51bcf3017fcfba80ffb" Nov 26 08:23:06 crc kubenswrapper[4909]: I1126 08:23:06.570497 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8vbph" Nov 26 08:23:07 crc kubenswrapper[4909]: I1126 08:23:07.301752 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:23:07 crc kubenswrapper[4909]: I1126 08:23:07.301842 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:23:07 crc kubenswrapper[4909]: I1126 08:23:07.301950 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 08:23:07 crc kubenswrapper[4909]: I1126 08:23:07.303230 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 08:23:07 crc kubenswrapper[4909]: I1126 08:23:07.303456 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" gracePeriod=600 Nov 26 08:23:07 crc kubenswrapper[4909]: E1126 08:23:07.422298 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:23:07 crc kubenswrapper[4909]: I1126 08:23:07.579160 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" 
containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" exitCode=0 Nov 26 08:23:07 crc kubenswrapper[4909]: I1126 08:23:07.579236 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36"} Nov 26 08:23:07 crc kubenswrapper[4909]: I1126 08:23:07.579487 4909 scope.go:117] "RemoveContainer" containerID="1d02eb9ff541f068efba31560f6d49bd8fa1db4909c006e87655b88c2586e0c0" Nov 26 08:23:07 crc kubenswrapper[4909]: I1126 08:23:07.579938 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:23:07 crc kubenswrapper[4909]: E1126 08:23:07.580184 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:23:08 crc kubenswrapper[4909]: I1126 08:23:08.374449 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 26 08:23:13 crc kubenswrapper[4909]: I1126 08:23:13.403423 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e378-account-create-lc99q"] Nov 26 08:23:13 crc kubenswrapper[4909]: E1126 08:23:13.404362 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd240cfb-fc3a-4822-8155-475932071966" containerName="mariadb-database-create" Nov 26 08:23:13 crc kubenswrapper[4909]: I1126 08:23:13.404376 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd240cfb-fc3a-4822-8155-475932071966" containerName="mariadb-database-create" Nov 26 08:23:13 crc kubenswrapper[4909]: I1126 08:23:13.404563 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd240cfb-fc3a-4822-8155-475932071966" containerName="mariadb-database-create" Nov 26 08:23:13 crc kubenswrapper[4909]: I1126 08:23:13.405123 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e378-account-create-lc99q" Nov 26 08:23:13 crc kubenswrapper[4909]: I1126 08:23:13.413852 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 26 08:23:13 crc kubenswrapper[4909]: I1126 08:23:13.416571 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e378-account-create-lc99q"] Nov 26 08:23:13 crc kubenswrapper[4909]: I1126 08:23:13.489390 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bthg5\" (UniqueName: \"kubernetes.io/projected/44da67f6-1772-47c0-9bbd-d2b793f0a84e-kube-api-access-bthg5\") pod \"keystone-e378-account-create-lc99q\" (UID: \"44da67f6-1772-47c0-9bbd-d2b793f0a84e\") " pod="openstack/keystone-e378-account-create-lc99q" Nov 26 08:23:13 crc kubenswrapper[4909]: I1126 08:23:13.590765 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bthg5\" (UniqueName: \"kubernetes.io/projected/44da67f6-1772-47c0-9bbd-d2b793f0a84e-kube-api-access-bthg5\") pod \"keystone-e378-account-create-lc99q\" (UID: \"44da67f6-1772-47c0-9bbd-d2b793f0a84e\") " pod="openstack/keystone-e378-account-create-lc99q" Nov 26 08:23:13 crc kubenswrapper[4909]: I1126 08:23:13.619701 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bthg5\" (UniqueName: \"kubernetes.io/projected/44da67f6-1772-47c0-9bbd-d2b793f0a84e-kube-api-access-bthg5\") pod \"keystone-e378-account-create-lc99q\" (UID: \"44da67f6-1772-47c0-9bbd-d2b793f0a84e\") " pod="openstack/keystone-e378-account-create-lc99q" Nov 26 08:23:13 crc kubenswrapper[4909]: I1126 08:23:13.739050 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e378-account-create-lc99q" Nov 26 08:23:14 crc kubenswrapper[4909]: I1126 08:23:14.243086 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e378-account-create-lc99q"] Nov 26 08:23:14 crc kubenswrapper[4909]: I1126 08:23:14.660676 4909 generic.go:334] "Generic (PLEG): container finished" podID="44da67f6-1772-47c0-9bbd-d2b793f0a84e" containerID="67b87a264811d9c1603b864be639c0472010eae91263b0459ac1761d283ebeb6" exitCode=0 Nov 26 08:23:14 crc kubenswrapper[4909]: I1126 08:23:14.660807 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e378-account-create-lc99q" event={"ID":"44da67f6-1772-47c0-9bbd-d2b793f0a84e","Type":"ContainerDied","Data":"67b87a264811d9c1603b864be639c0472010eae91263b0459ac1761d283ebeb6"} Nov 26 08:23:14 crc kubenswrapper[4909]: I1126 08:23:14.660947 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e378-account-create-lc99q" event={"ID":"44da67f6-1772-47c0-9bbd-d2b793f0a84e","Type":"ContainerStarted","Data":"fcfb0df27bcddb20a693f9da33c9d0dc7f599c1a8e70f13ed9053255a16381d9"} Nov 26 08:23:16 crc kubenswrapper[4909]: I1126 08:23:16.059082 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e378-account-create-lc99q" Nov 26 08:23:16 crc kubenswrapper[4909]: I1126 08:23:16.128841 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bthg5\" (UniqueName: \"kubernetes.io/projected/44da67f6-1772-47c0-9bbd-d2b793f0a84e-kube-api-access-bthg5\") pod \"44da67f6-1772-47c0-9bbd-d2b793f0a84e\" (UID: \"44da67f6-1772-47c0-9bbd-d2b793f0a84e\") " Nov 26 08:23:16 crc kubenswrapper[4909]: I1126 08:23:16.133953 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44da67f6-1772-47c0-9bbd-d2b793f0a84e-kube-api-access-bthg5" (OuterVolumeSpecName: "kube-api-access-bthg5") pod "44da67f6-1772-47c0-9bbd-d2b793f0a84e" (UID: "44da67f6-1772-47c0-9bbd-d2b793f0a84e"). InnerVolumeSpecName "kube-api-access-bthg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:23:16 crc kubenswrapper[4909]: I1126 08:23:16.230858 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bthg5\" (UniqueName: \"kubernetes.io/projected/44da67f6-1772-47c0-9bbd-d2b793f0a84e-kube-api-access-bthg5\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:16 crc kubenswrapper[4909]: I1126 08:23:16.681884 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e378-account-create-lc99q" event={"ID":"44da67f6-1772-47c0-9bbd-d2b793f0a84e","Type":"ContainerDied","Data":"fcfb0df27bcddb20a693f9da33c9d0dc7f599c1a8e70f13ed9053255a16381d9"} Nov 26 08:23:16 crc kubenswrapper[4909]: I1126 08:23:16.681922 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e378-account-create-lc99q" Nov 26 08:23:16 crc kubenswrapper[4909]: I1126 08:23:16.681929 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcfb0df27bcddb20a693f9da33c9d0dc7f599c1a8e70f13ed9053255a16381d9" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.507707 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:23:18 crc kubenswrapper[4909]: E1126 08:23:18.507993 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.800354 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-hsxh2"] Nov 26 08:23:18 crc kubenswrapper[4909]: E1126 08:23:18.801424 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44da67f6-1772-47c0-9bbd-d2b793f0a84e" containerName="mariadb-account-create" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.801540 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="44da67f6-1772-47c0-9bbd-d2b793f0a84e" containerName="mariadb-account-create" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.801872 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="44da67f6-1772-47c0-9bbd-d2b793f0a84e" containerName="mariadb-account-create" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.802857 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.805133 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.805198 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgr6r" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.805448 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.805645 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.811655 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hsxh2"] Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.872063 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-combined-ca-bundle\") pod \"keystone-db-sync-hsxh2\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.872122 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg8l4\" (UniqueName: \"kubernetes.io/projected/74022312-19ed-4ed7-b5d7-03a842e1de8e-kube-api-access-xg8l4\") pod \"keystone-db-sync-hsxh2\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.872170 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-config-data\") pod \"keystone-db-sync-hsxh2\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.973877 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-combined-ca-bundle\") pod \"keystone-db-sync-hsxh2\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.974183 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg8l4\" (UniqueName: \"kubernetes.io/projected/74022312-19ed-4ed7-b5d7-03a842e1de8e-kube-api-access-xg8l4\") pod \"keystone-db-sync-hsxh2\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.974366 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-config-data\") pod \"keystone-db-sync-hsxh2\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.980424 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-combined-ca-bundle\") pod \"keystone-db-sync-hsxh2\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " 
pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.981369 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-config-data\") pod \"keystone-db-sync-hsxh2\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:18 crc kubenswrapper[4909]: I1126 08:23:18.994080 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg8l4\" (UniqueName: \"kubernetes.io/projected/74022312-19ed-4ed7-b5d7-03a842e1de8e-kube-api-access-xg8l4\") pod \"keystone-db-sync-hsxh2\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:19 crc kubenswrapper[4909]: I1126 08:23:19.139373 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:19 crc kubenswrapper[4909]: I1126 08:23:19.619391 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hsxh2"] Nov 26 08:23:19 crc kubenswrapper[4909]: I1126 08:23:19.706188 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hsxh2" event={"ID":"74022312-19ed-4ed7-b5d7-03a842e1de8e","Type":"ContainerStarted","Data":"d1fe0780ae10ee7748f840331f7f809d2c2b71ecb1de0f73a37bcea5c27e348d"} Nov 26 08:23:20 crc kubenswrapper[4909]: I1126 08:23:20.718684 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hsxh2" event={"ID":"74022312-19ed-4ed7-b5d7-03a842e1de8e","Type":"ContainerStarted","Data":"8794fb03db07a049ffa784e3a45079b185a38d397cdd470d942bbb017487b203"} Nov 26 08:23:20 crc kubenswrapper[4909]: I1126 08:23:20.746433 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-hsxh2" podStartSLOduration=2.746413334 podStartE2EDuration="2.746413334s" podCreationTimestamp="2025-11-26 08:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:23:20.746098416 +0000 UTC m=+4972.892309582" watchObservedRunningTime="2025-11-26 08:23:20.746413334 +0000 UTC m=+4972.892624500" Nov 26 08:23:21 crc kubenswrapper[4909]: I1126 08:23:21.732620 4909 generic.go:334] "Generic (PLEG): container finished" podID="74022312-19ed-4ed7-b5d7-03a842e1de8e" containerID="8794fb03db07a049ffa784e3a45079b185a38d397cdd470d942bbb017487b203" exitCode=0 Nov 26 08:23:21 crc kubenswrapper[4909]: I1126 08:23:21.732661 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hsxh2" event={"ID":"74022312-19ed-4ed7-b5d7-03a842e1de8e","Type":"ContainerDied","Data":"8794fb03db07a049ffa784e3a45079b185a38d397cdd470d942bbb017487b203"} Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.124521 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.155891 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-combined-ca-bundle\") pod \"74022312-19ed-4ed7-b5d7-03a842e1de8e\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.156033 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg8l4\" (UniqueName: \"kubernetes.io/projected/74022312-19ed-4ed7-b5d7-03a842e1de8e-kube-api-access-xg8l4\") pod \"74022312-19ed-4ed7-b5d7-03a842e1de8e\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.156128 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-config-data\") pod \"74022312-19ed-4ed7-b5d7-03a842e1de8e\" (UID: \"74022312-19ed-4ed7-b5d7-03a842e1de8e\") " Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.161154 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74022312-19ed-4ed7-b5d7-03a842e1de8e-kube-api-access-xg8l4" (OuterVolumeSpecName: "kube-api-access-xg8l4") pod "74022312-19ed-4ed7-b5d7-03a842e1de8e" (UID: "74022312-19ed-4ed7-b5d7-03a842e1de8e"). InnerVolumeSpecName "kube-api-access-xg8l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.182724 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74022312-19ed-4ed7-b5d7-03a842e1de8e" (UID: "74022312-19ed-4ed7-b5d7-03a842e1de8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.203823 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-config-data" (OuterVolumeSpecName: "config-data") pod "74022312-19ed-4ed7-b5d7-03a842e1de8e" (UID: "74022312-19ed-4ed7-b5d7-03a842e1de8e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.258174 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg8l4\" (UniqueName: \"kubernetes.io/projected/74022312-19ed-4ed7-b5d7-03a842e1de8e-kube-api-access-xg8l4\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.258210 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.258235 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74022312-19ed-4ed7-b5d7-03a842e1de8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.752673 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hsxh2" event={"ID":"74022312-19ed-4ed7-b5d7-03a842e1de8e","Type":"ContainerDied","Data":"d1fe0780ae10ee7748f840331f7f809d2c2b71ecb1de0f73a37bcea5c27e348d"} Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.752725 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1fe0780ae10ee7748f840331f7f809d2c2b71ecb1de0f73a37bcea5c27e348d" Nov 26 08:23:23 crc kubenswrapper[4909]: I1126 08:23:23.752746 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hsxh2" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.033304 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-62n6s"] Nov 26 08:23:24 crc kubenswrapper[4909]: E1126 08:23:24.033660 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74022312-19ed-4ed7-b5d7-03a842e1de8e" containerName="keystone-db-sync" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.033676 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="74022312-19ed-4ed7-b5d7-03a842e1de8e" containerName="keystone-db-sync" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.033831 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="74022312-19ed-4ed7-b5d7-03a842e1de8e" containerName="keystone-db-sync" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.034380 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.043097 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.043516 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.043649 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.043756 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgr6r" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.053289 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-62n6s"] Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.059487 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7485969d9c-mvbhq"] Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.060799 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.072114 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-scripts\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.072161 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-combined-ca-bundle\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.072230 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-credential-keys\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.072275 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lnv\" (UniqueName: \"kubernetes.io/projected/afe5f303-c771-4f41-b9ff-9c675bbb6e81-kube-api-access-z7lnv\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.072298 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-config-data\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.072328 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-fernet-keys\") pod \"keystone-bootstrap-62n6s\" (UID: 
\"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.078115 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7485969d9c-mvbhq"] Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.175544 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-config\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.175644 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-dns-svc\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.175743 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-credential-keys\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.175767 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-nb\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.175825 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7lnv\" (UniqueName: \"kubernetes.io/projected/afe5f303-c771-4f41-b9ff-9c675bbb6e81-kube-api-access-z7lnv\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.175859 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-config-data\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.175899 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-fernet-keys\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.175934 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-sb\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.175959 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-htd24\" (UniqueName: \"kubernetes.io/projected/1b0c9935-9165-4a32-bd48-d4be99ebccbd-kube-api-access-htd24\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.176022 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-scripts\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.176044 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-combined-ca-bundle\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.192564 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-config-data\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.193219 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-fernet-keys\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.198103 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-combined-ca-bundle\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.203069 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-credential-keys\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.210938 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-scripts\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.211734 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7lnv\" (UniqueName: \"kubernetes.io/projected/afe5f303-c771-4f41-b9ff-9c675bbb6e81-kube-api-access-z7lnv\") pod \"keystone-bootstrap-62n6s\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.278128 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-sb\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: 
\"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.278182 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htd24\" (UniqueName: \"kubernetes.io/projected/1b0c9935-9165-4a32-bd48-d4be99ebccbd-kube-api-access-htd24\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.278245 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-config\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.278271 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-dns-svc\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.278332 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-nb\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.279332 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-nb\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.279562 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-sb\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.279929 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-config\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.287935 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-dns-svc\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.312498 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htd24\" (UniqueName: \"kubernetes.io/projected/1b0c9935-9165-4a32-bd48-d4be99ebccbd-kube-api-access-htd24\") pod \"dnsmasq-dns-7485969d9c-mvbhq\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc 
kubenswrapper[4909]: I1126 08:23:24.368082 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.382063 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.830479 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7485969d9c-mvbhq"] Nov 26 08:23:24 crc kubenswrapper[4909]: I1126 08:23:24.882526 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-62n6s"] Nov 26 08:23:25 crc kubenswrapper[4909]: I1126 08:23:25.771428 4909 generic.go:334] "Generic (PLEG): container finished" podID="1b0c9935-9165-4a32-bd48-d4be99ebccbd" containerID="b0ed9b92298641e29a944c730eac85c15105e6b62814b5d4f05f19e9eb0e711e" exitCode=0 Nov 26 08:23:25 crc kubenswrapper[4909]: I1126 08:23:25.771502 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" event={"ID":"1b0c9935-9165-4a32-bd48-d4be99ebccbd","Type":"ContainerDied","Data":"b0ed9b92298641e29a944c730eac85c15105e6b62814b5d4f05f19e9eb0e711e"} Nov 26 08:23:25 crc kubenswrapper[4909]: I1126 08:23:25.771933 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" event={"ID":"1b0c9935-9165-4a32-bd48-d4be99ebccbd","Type":"ContainerStarted","Data":"f858dec80bb354bc4e886135fca719af5af1c65d2786c7533a3e2ce1b64bf3ff"} Nov 26 08:23:25 crc kubenswrapper[4909]: I1126 08:23:25.773495 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62n6s" event={"ID":"afe5f303-c771-4f41-b9ff-9c675bbb6e81","Type":"ContainerStarted","Data":"3476f4a69722e91a35c14ce572ef194893b7d5c736784fbf15569b0235266687"} Nov 26 08:23:25 crc kubenswrapper[4909]: I1126 08:23:25.773526 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62n6s" event={"ID":"afe5f303-c771-4f41-b9ff-9c675bbb6e81","Type":"ContainerStarted","Data":"ecfd7e118db0688c38c2ca0282005212a5a4fb6837a31ca351c2f3974c0e5c00"} Nov 26 08:23:25 crc kubenswrapper[4909]: I1126 08:23:25.837825 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-62n6s" podStartSLOduration=1.837801266 podStartE2EDuration="1.837801266s" podCreationTimestamp="2025-11-26 08:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:23:25.823525187 +0000 UTC m=+4977.969736383" watchObservedRunningTime="2025-11-26 08:23:25.837801266 +0000 UTC m=+4977.984012432" Nov 26 08:23:26 crc kubenswrapper[4909]: I1126 08:23:26.785292 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" event={"ID":"1b0c9935-9165-4a32-bd48-d4be99ebccbd","Type":"ContainerStarted","Data":"6f7064a97e29ed3e4208101640c1c99e95f5bf2c3b9399fbe9983e7f26c21770"} Nov 26 08:23:26 crc kubenswrapper[4909]: I1126 08:23:26.834542 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" podStartSLOduration=2.834515351 podStartE2EDuration="2.834515351s" podCreationTimestamp="2025-11-26 08:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:23:26.817475467 +0000 UTC m=+4978.963686653" 
watchObservedRunningTime="2025-11-26 08:23:26.834515351 +0000 UTC m=+4978.980726557" Nov 26 08:23:27 crc kubenswrapper[4909]: I1126 08:23:27.792793 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:28 crc kubenswrapper[4909]: I1126 08:23:28.810149 4909 generic.go:334] "Generic (PLEG): container finished" podID="afe5f303-c771-4f41-b9ff-9c675bbb6e81" containerID="3476f4a69722e91a35c14ce572ef194893b7d5c736784fbf15569b0235266687" exitCode=0 Nov 26 08:23:28 crc kubenswrapper[4909]: I1126 08:23:28.811827 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62n6s" event={"ID":"afe5f303-c771-4f41-b9ff-9c675bbb6e81","Type":"ContainerDied","Data":"3476f4a69722e91a35c14ce572ef194893b7d5c736784fbf15569b0235266687"} Nov 26 08:23:29 crc kubenswrapper[4909]: I1126 08:23:29.499821 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:23:29 crc kubenswrapper[4909]: E1126 08:23:29.500534 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.135191 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.173899 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-combined-ca-bundle\") pod \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.173950 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-config-data\") pod \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.173975 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7lnv\" (UniqueName: \"kubernetes.io/projected/afe5f303-c771-4f41-b9ff-9c675bbb6e81-kube-api-access-z7lnv\") pod \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.174004 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-fernet-keys\") pod \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.174024 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-scripts\") pod \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.174070 4909 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-credential-keys\") pod \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.180937 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "afe5f303-c771-4f41-b9ff-9c675bbb6e81" (UID: "afe5f303-c771-4f41-b9ff-9c675bbb6e81"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.180947 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe5f303-c771-4f41-b9ff-9c675bbb6e81-kube-api-access-z7lnv" (OuterVolumeSpecName: "kube-api-access-z7lnv") pod "afe5f303-c771-4f41-b9ff-9c675bbb6e81" (UID: "afe5f303-c771-4f41-b9ff-9c675bbb6e81"). InnerVolumeSpecName "kube-api-access-z7lnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.180966 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "afe5f303-c771-4f41-b9ff-9c675bbb6e81" (UID: "afe5f303-c771-4f41-b9ff-9c675bbb6e81"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.181096 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-scripts" (OuterVolumeSpecName: "scripts") pod "afe5f303-c771-4f41-b9ff-9c675bbb6e81" (UID: "afe5f303-c771-4f41-b9ff-9c675bbb6e81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:30 crc kubenswrapper[4909]: E1126 08:23:30.198868 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-combined-ca-bundle podName:afe5f303-c771-4f41-b9ff-9c675bbb6e81 nodeName:}" failed. No retries permitted until 2025-11-26 08:23:30.698836022 +0000 UTC m=+4982.845047188 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-combined-ca-bundle") pod "afe5f303-c771-4f41-b9ff-9c675bbb6e81" (UID: "afe5f303-c771-4f41-b9ff-9c675bbb6e81") : error deleting /var/lib/kubelet/pods/afe5f303-c771-4f41-b9ff-9c675bbb6e81/volume-subpaths: remove /var/lib/kubelet/pods/afe5f303-c771-4f41-b9ff-9c675bbb6e81/volume-subpaths: no such file or directory Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.202181 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-config-data" (OuterVolumeSpecName: "config-data") pod "afe5f303-c771-4f41-b9ff-9c675bbb6e81" (UID: "afe5f303-c771-4f41-b9ff-9c675bbb6e81"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.275716 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.275764 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7lnv\" (UniqueName: \"kubernetes.io/projected/afe5f303-c771-4f41-b9ff-9c675bbb6e81-kube-api-access-z7lnv\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.275777 4909 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.275789 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.275802 4909 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.783877 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-combined-ca-bundle\") pod \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\" (UID: \"afe5f303-c771-4f41-b9ff-9c675bbb6e81\") " Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.789918 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afe5f303-c771-4f41-b9ff-9c675bbb6e81" (UID: "afe5f303-c771-4f41-b9ff-9c675bbb6e81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.836356 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62n6s" event={"ID":"afe5f303-c771-4f41-b9ff-9c675bbb6e81","Type":"ContainerDied","Data":"ecfd7e118db0688c38c2ca0282005212a5a4fb6837a31ca351c2f3974c0e5c00"} Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.836407 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecfd7e118db0688c38c2ca0282005212a5a4fb6837a31ca351c2f3974c0e5c00" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.836415 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-62n6s" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.886121 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe5f303-c771-4f41-b9ff-9c675bbb6e81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.914106 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-62n6s"] Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.920844 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-62n6s"] Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.965201 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hr7tj"] Nov 26 08:23:30 crc kubenswrapper[4909]: E1126 08:23:30.965529 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe5f303-c771-4f41-b9ff-9c675bbb6e81" containerName="keystone-bootstrap" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.965541 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe5f303-c771-4f41-b9ff-9c675bbb6e81" containerName="keystone-bootstrap" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.965700 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe5f303-c771-4f41-b9ff-9c675bbb6e81" containerName="keystone-bootstrap" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.966220 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.968129 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.968483 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.972817 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.973374 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgr6r" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.983439 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hr7tj"] Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.990175 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-config-data\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.990250 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-scripts\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.990354 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-fernet-keys\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " 
pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.990455 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-credential-keys\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.990481 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xj6j\" (UniqueName: \"kubernetes.io/projected/d0e63ca5-fc22-4174-b9d3-bb47fa838467-kube-api-access-5xj6j\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:30 crc kubenswrapper[4909]: I1126 08:23:30.990511 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-combined-ca-bundle\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.092376 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-credential-keys\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.092427 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xj6j\" (UniqueName: \"kubernetes.io/projected/d0e63ca5-fc22-4174-b9d3-bb47fa838467-kube-api-access-5xj6j\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.092455 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-combined-ca-bundle\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.092506 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-config-data\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.092540 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-scripts\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.092585 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-fernet-keys\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc 
kubenswrapper[4909]: I1126 08:23:31.097418 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-scripts\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.097907 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-fernet-keys\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.097934 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-credential-keys\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.098172 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-config-data\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.098388 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-combined-ca-bundle\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.108807 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xj6j\" (UniqueName: \"kubernetes.io/projected/d0e63ca5-fc22-4174-b9d3-bb47fa838467-kube-api-access-5xj6j\") pod \"keystone-bootstrap-hr7tj\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.294948 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.736747 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hr7tj"] Nov 26 08:23:31 crc kubenswrapper[4909]: W1126 08:23:31.738182 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0e63ca5_fc22_4174_b9d3_bb47fa838467.slice/crio-28bbcf6bbf37b966fc08cd1485f25451e86394a76522acc131235fb3903a3cf2 WatchSource:0}: Error finding container 28bbcf6bbf37b966fc08cd1485f25451e86394a76522acc131235fb3903a3cf2: Status 404 returned error can't find the container with id 28bbcf6bbf37b966fc08cd1485f25451e86394a76522acc131235fb3903a3cf2 Nov 26 08:23:31 crc kubenswrapper[4909]: I1126 08:23:31.844332 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hr7tj" event={"ID":"d0e63ca5-fc22-4174-b9d3-bb47fa838467","Type":"ContainerStarted","Data":"28bbcf6bbf37b966fc08cd1485f25451e86394a76522acc131235fb3903a3cf2"} Nov 26 08:23:32 crc kubenswrapper[4909]: I1126 08:23:32.514880 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe5f303-c771-4f41-b9ff-9c675bbb6e81" path="/var/lib/kubelet/pods/afe5f303-c771-4f41-b9ff-9c675bbb6e81/volumes" Nov 26 08:23:32 crc kubenswrapper[4909]: I1126 08:23:32.859110 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hr7tj" event={"ID":"d0e63ca5-fc22-4174-b9d3-bb47fa838467","Type":"ContainerStarted","Data":"1b07b68e8c6b5e4b739af19646ff3e390df5ac14b6618e916ee4799bd9f9de29"} Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.384679 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.423932 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hr7tj" podStartSLOduration=4.423906223 podStartE2EDuration="4.423906223s" podCreationTimestamp="2025-11-26 08:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:23:32.885749564 +0000 UTC m=+4985.031960820" watchObservedRunningTime="2025-11-26 08:23:34.423906223 +0000 UTC m=+4986.570117389" Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.448958 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c54468fdc-w74nb"] Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.449384 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" podUID="e49fc00a-2715-47f2-858c-606a04d262b8" containerName="dnsmasq-dns" containerID="cri-o://148390318a352fbed1919f42225d443d21316df99cd0770883277ad45ff57d53" gracePeriod=10 Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.880821 4909 generic.go:334] "Generic (PLEG): container finished" podID="e49fc00a-2715-47f2-858c-606a04d262b8" containerID="148390318a352fbed1919f42225d443d21316df99cd0770883277ad45ff57d53" exitCode=0 Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.880891 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" event={"ID":"e49fc00a-2715-47f2-858c-606a04d262b8","Type":"ContainerDied","Data":"148390318a352fbed1919f42225d443d21316df99cd0770883277ad45ff57d53"} Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.881273 4909
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" event={"ID":"e49fc00a-2715-47f2-858c-606a04d262b8","Type":"ContainerDied","Data":"21cc6349302e37c40f09c5f1cfb07f38a94c61b171df0cfdc8f3850a4e179059"} Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.881291 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21cc6349302e37c40f09c5f1cfb07f38a94c61b171df0cfdc8f3850a4e179059" Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.882472 4909 generic.go:334] "Generic (PLEG): container finished" podID="d0e63ca5-fc22-4174-b9d3-bb47fa838467" containerID="1b07b68e8c6b5e4b739af19646ff3e390df5ac14b6618e916ee4799bd9f9de29" exitCode=0 Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.882509 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hr7tj" event={"ID":"d0e63ca5-fc22-4174-b9d3-bb47fa838467","Type":"ContainerDied","Data":"1b07b68e8c6b5e4b739af19646ff3e390df5ac14b6618e916ee4799bd9f9de29"} Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.922440 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c54468fdc-w74nb" Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.971113 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-config\") pod \"e49fc00a-2715-47f2-858c-606a04d262b8\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.971222 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9wpm\" (UniqueName: \"kubernetes.io/projected/e49fc00a-2715-47f2-858c-606a04d262b8-kube-api-access-k9wpm\") pod \"e49fc00a-2715-47f2-858c-606a04d262b8\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.971254 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-dns-svc\") pod \"e49fc00a-2715-47f2-858c-606a04d262b8\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.971353 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-nb\") pod \"e49fc00a-2715-47f2-858c-606a04d262b8\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.971430 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-sb\") pod \"e49fc00a-2715-47f2-858c-606a04d262b8\" (UID: \"e49fc00a-2715-47f2-858c-606a04d262b8\") " Nov 26 08:23:34 crc kubenswrapper[4909]: I1126 08:23:34.978497 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49fc00a-2715-47f2-858c-606a04d262b8-kube-api-access-k9wpm" (OuterVolumeSpecName: "kube-api-access-k9wpm") pod "e49fc00a-2715-47f2-858c-606a04d262b8" (UID: "e49fc00a-2715-47f2-858c-606a04d262b8"). InnerVolumeSpecName "kube-api-access-k9wpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.009816 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e49fc00a-2715-47f2-858c-606a04d262b8" (UID: "e49fc00a-2715-47f2-858c-606a04d262b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.012905 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e49fc00a-2715-47f2-858c-606a04d262b8" (UID: "e49fc00a-2715-47f2-858c-606a04d262b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.018843 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-config" (OuterVolumeSpecName: "config") pod "e49fc00a-2715-47f2-858c-606a04d262b8" (UID: "e49fc00a-2715-47f2-858c-606a04d262b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.030471 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e49fc00a-2715-47f2-858c-606a04d262b8" (UID: "e49fc00a-2715-47f2-858c-606a04d262b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.074391 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.074435 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.074446 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.074456 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9wpm\" (UniqueName: \"kubernetes.io/projected/e49fc00a-2715-47f2-858c-606a04d262b8-kube-api-access-k9wpm\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.074465 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49fc00a-2715-47f2-858c-606a04d262b8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.891353 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.926999 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c54468fdc-w74nb"] Nov 26 08:23:35 crc kubenswrapper[4909]: I1126 08:23:35.933820 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c54468fdc-w74nb"] Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.245536 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.299230 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-fernet-keys\") pod \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.299330 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xj6j\" (UniqueName: \"kubernetes.io/projected/d0e63ca5-fc22-4174-b9d3-bb47fa838467-kube-api-access-5xj6j\") pod \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.299371 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-credential-keys\") pod \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.299423 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-scripts\") pod \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.299488 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-config-data\") pod \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.299548 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-combined-ca-bundle\") pod \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\" (UID: \"d0e63ca5-fc22-4174-b9d3-bb47fa838467\") " Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.303847 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d0e63ca5-fc22-4174-b9d3-bb47fa838467" (UID: "d0e63ca5-fc22-4174-b9d3-bb47fa838467"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.304210 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d0e63ca5-fc22-4174-b9d3-bb47fa838467" (UID: "d0e63ca5-fc22-4174-b9d3-bb47fa838467"). InnerVolumeSpecName "credential-keys".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.305852 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e63ca5-fc22-4174-b9d3-bb47fa838467-kube-api-access-5xj6j" (OuterVolumeSpecName: "kube-api-access-5xj6j") pod "d0e63ca5-fc22-4174-b9d3-bb47fa838467" (UID: "d0e63ca5-fc22-4174-b9d3-bb47fa838467"). InnerVolumeSpecName "kube-api-access-5xj6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.314504 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-scripts" (OuterVolumeSpecName: "scripts") pod "d0e63ca5-fc22-4174-b9d3-bb47fa838467" (UID: "d0e63ca5-fc22-4174-b9d3-bb47fa838467"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.324363 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-config-data" (OuterVolumeSpecName: "config-data") pod "d0e63ca5-fc22-4174-b9d3-bb47fa838467" (UID: "d0e63ca5-fc22-4174-b9d3-bb47fa838467"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.327063 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0e63ca5-fc22-4174-b9d3-bb47fa838467" (UID: "d0e63ca5-fc22-4174-b9d3-bb47fa838467"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.401532 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.401577 4909 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.401588 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xj6j\" (UniqueName: \"kubernetes.io/projected/d0e63ca5-fc22-4174-b9d3-bb47fa838467-kube-api-access-5xj6j\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.401615 4909 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.401624 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.401631 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e63ca5-fc22-4174-b9d3-bb47fa838467-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.510077 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e49fc00a-2715-47f2-858c-606a04d262b8" path="/var/lib/kubelet/pods/e49fc00a-2715-47f2-858c-606a04d262b8/volumes" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.901398 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hr7tj" event={"ID":"d0e63ca5-fc22-4174-b9d3-bb47fa838467","Type":"ContainerDied","Data":"28bbcf6bbf37b966fc08cd1485f25451e86394a76522acc131235fb3903a3cf2"} Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.901452 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28bbcf6bbf37b966fc08cd1485f25451e86394a76522acc131235fb3903a3cf2" Nov 26 08:23:36 crc kubenswrapper[4909]: I1126 08:23:36.901469 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hr7tj" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.357297 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c66bd4b5c-xlb68"] Nov 26 08:23:37 crc kubenswrapper[4909]: E1126 08:23:37.357777 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49fc00a-2715-47f2-858c-606a04d262b8" containerName="init" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.357802 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49fc00a-2715-47f2-858c-606a04d262b8" containerName="init" Nov 26 08:23:37 crc kubenswrapper[4909]: E1126 08:23:37.357826 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e63ca5-fc22-4174-b9d3-bb47fa838467" containerName="keystone-bootstrap" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.357836 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e63ca5-fc22-4174-b9d3-bb47fa838467" containerName="keystone-bootstrap" Nov 26 08:23:37 crc kubenswrapper[4909]: E1126 08:23:37.357880 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49fc00a-2715-47f2-858c-606a04d262b8" containerName="dnsmasq-dns" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.357888 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49fc00a-2715-47f2-858c-606a04d262b8" containerName="dnsmasq-dns" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.358098 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e49fc00a-2715-47f2-858c-606a04d262b8" containerName="dnsmasq-dns" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.358129 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e63ca5-fc22-4174-b9d3-bb47fa838467" containerName="keystone-bootstrap" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.358963 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.367097 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c66bd4b5c-xlb68"] Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.407091 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.407452 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.407476 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.407755 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgr6r" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.434551 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-credential-keys\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.434687 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4td42\" (UniqueName: \"kubernetes.io/projected/bba4a087-0b07-4a45-b46d-989e7681e1d0-kube-api-access-4td42\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.434929 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-scripts\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.434958 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-config-data\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.435006 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-combined-ca-bundle\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.435065 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-fernet-keys\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.536872 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-scripts\") pod \"keystone-c66bd4b5c-xlb68\" (UID: 
\"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.536916 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-config-data\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.536941 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-combined-ca-bundle\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.536968 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-fernet-keys\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.537001 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-credential-keys\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.537028 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4td42\" (UniqueName: \"kubernetes.io/projected/bba4a087-0b07-4a45-b46d-989e7681e1d0-kube-api-access-4td42\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.542759 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-credential-keys\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.542866 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-fernet-keys\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.543405 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-scripts\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.543893 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-combined-ca-bundle\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.544177 4909 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba4a087-0b07-4a45-b46d-989e7681e1d0-config-data\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.563066 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4td42\" (UniqueName: \"kubernetes.io/projected/bba4a087-0b07-4a45-b46d-989e7681e1d0-kube-api-access-4td42\") pod \"keystone-c66bd4b5c-xlb68\" (UID: \"bba4a087-0b07-4a45-b46d-989e7681e1d0\") " pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:37 crc kubenswrapper[4909]: I1126 08:23:37.737090 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:38 crc kubenswrapper[4909]: I1126 08:23:38.232384 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c66bd4b5c-xlb68"] Nov 26 08:23:38 crc kubenswrapper[4909]: W1126 08:23:38.250627 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbba4a087_0b07_4a45_b46d_989e7681e1d0.slice/crio-522dc6d29c51c770cbfa8c118b45c26261b8c130e08ba9d1b69e27b8d29b9488 WatchSource:0}: Error finding container 522dc6d29c51c770cbfa8c118b45c26261b8c130e08ba9d1b69e27b8d29b9488: Status 404 returned error can't find the container with id 522dc6d29c51c770cbfa8c118b45c26261b8c130e08ba9d1b69e27b8d29b9488 Nov 26 08:23:38 crc kubenswrapper[4909]: I1126 08:23:38.924566 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c66bd4b5c-xlb68" event={"ID":"bba4a087-0b07-4a45-b46d-989e7681e1d0","Type":"ContainerStarted","Data":"0f02ee5ba9ea6432ff3c7124bbd18815371bc11a318c89a464f090328320a404"} Nov 26 08:23:38 crc kubenswrapper[4909]: I1126 08:23:38.924861 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c66bd4b5c-xlb68" event={"ID":"bba4a087-0b07-4a45-b46d-989e7681e1d0","Type":"ContainerStarted","Data":"522dc6d29c51c770cbfa8c118b45c26261b8c130e08ba9d1b69e27b8d29b9488"} Nov 26 08:23:38 crc kubenswrapper[4909]: I1126 08:23:38.924913 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:23:38 crc kubenswrapper[4909]: I1126 08:23:38.947981 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c66bd4b5c-xlb68" podStartSLOduration=1.947959845 podStartE2EDuration="1.947959845s" podCreationTimestamp="2025-11-26 08:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:23:38.945775285 +0000 UTC m=+4991.091986461" watchObservedRunningTime="2025-11-26 08:23:38.947959845 +0000 UTC m=+4991.094171021" Nov 26 08:23:43 crc kubenswrapper[4909]: I1126 08:23:43.499217 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:23:43 crc kubenswrapper[4909]: E1126 08:23:43.501026 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" 
podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:23:58 crc kubenswrapper[4909]: I1126 08:23:58.505231 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:23:58 crc kubenswrapper[4909]: E1126 08:23:58.506173 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:24:09 crc kubenswrapper[4909]: I1126 08:24:09.211742 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-c66bd4b5c-xlb68" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.435935 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.438727 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.440738 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qlxjn" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.442568 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.443509 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.445491 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.499107 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:24:13 crc kubenswrapper[4909]: E1126 08:24:13.499537 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.594235 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp8r8\" (UniqueName: \"kubernetes.io/projected/537df512-9370-4be2-9796-c5ed4615d017-kube-api-access-bp8r8\") pod \"openstackclient\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " pod="openstack/openstackclient" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.594285 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/537df512-9370-4be2-9796-c5ed4615d017-openstack-config-secret\") pod \"openstackclient\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " pod="openstack/openstackclient" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.594417 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/537df512-9370-4be2-9796-c5ed4615d017-openstack-config\") pod \"openstackclient\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " pod="openstack/openstackclient" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.696084 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/537df512-9370-4be2-9796-c5ed4615d017-openstack-config\") pod \"openstackclient\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " pod="openstack/openstackclient" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.696238 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp8r8\" (UniqueName: \"kubernetes.io/projected/537df512-9370-4be2-9796-c5ed4615d017-kube-api-access-bp8r8\") pod \"openstackclient\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " pod="openstack/openstackclient" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.696270 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/537df512-9370-4be2-9796-c5ed4615d017-openstack-config-secret\") pod \"openstackclient\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " pod="openstack/openstackclient" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.696896 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/537df512-9370-4be2-9796-c5ed4615d017-openstack-config\") pod \"openstackclient\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " pod="openstack/openstackclient" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.708614 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/537df512-9370-4be2-9796-c5ed4615d017-openstack-config-secret\") pod \"openstackclient\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " pod="openstack/openstackclient" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.717187 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp8r8\" (UniqueName: \"kubernetes.io/projected/537df512-9370-4be2-9796-c5ed4615d017-kube-api-access-bp8r8\") pod \"openstackclient\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " pod="openstack/openstackclient" Nov 26 08:24:13 crc kubenswrapper[4909]: I1126 08:24:13.762023 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 26 08:24:14 crc kubenswrapper[4909]: I1126 08:24:14.226416 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 26 08:24:14 crc kubenswrapper[4909]: I1126 08:24:14.244051 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"537df512-9370-4be2-9796-c5ed4615d017","Type":"ContainerStarted","Data":"e133cc4ccae0569f86e7d34ca012d487899986f15698091d7b027375688b3ab8"} Nov 26 08:24:15 crc kubenswrapper[4909]: I1126 08:24:15.251503 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"537df512-9370-4be2-9796-c5ed4615d017","Type":"ContainerStarted","Data":"8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd"} Nov 26 08:24:15 crc kubenswrapper[4909]: I1126 08:24:15.274181 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.27415909 podStartE2EDuration="2.27415909s" podCreationTimestamp="2025-11-26 08:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:24:15.266405348 +0000 UTC m=+5027.412616524" watchObservedRunningTime="2025-11-26 08:24:15.27415909 +0000 UTC m=+5027.420370256" Nov 26 08:24:25 crc kubenswrapper[4909]: I1126 08:24:25.498776 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:24:25 crc kubenswrapper[4909]: E1126 08:24:25.499528 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:24:37 crc kubenswrapper[4909]: I1126 08:24:37.530530 4909 scope.go:117] "RemoveContainer" containerID="54a072066f44826ab02982db86fa001471015a7111a0bb65717e3e06f83a8837" Nov 26 08:24:37 crc kubenswrapper[4909]: I1126 08:24:37.554177 4909 scope.go:117] "RemoveContainer" containerID="3dbe581553ce8168711fe40da33103920380d42dcf942f45d4c5f7df10acd5db" Nov 26 08:24:37 crc kubenswrapper[4909]: I1126 08:24:37.591412 4909 scope.go:117] "RemoveContainer" containerID="1adac554221a353b2c9bc846f58f079fc71f9856e5847551a7238675e8f591c3" Nov 26 08:24:37 crc kubenswrapper[4909]: I1126 08:24:37.622309 4909 scope.go:117] "RemoveContainer" containerID="73e671a5833510c819c3ab37952f95312ac41df893dcbfd4bedc45947527c74e" Nov 26 08:24:37 crc kubenswrapper[4909]: I1126 08:24:37.667820 4909 scope.go:117] "RemoveContainer" containerID="a00cc0a4ab129dbbfeaff9c9bc05ad28e52def8bb85dd77423f528a6d00b7e52" Nov 26 08:24:39 crc kubenswrapper[4909]: I1126 08:24:39.499979 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:24:39 crc kubenswrapper[4909]: E1126 08:24:39.500796 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:24:54 crc kubenswrapper[4909]: I1126 08:24:54.500947 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:24:54 crc kubenswrapper[4909]: E1126 08:24:54.501777 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:25:08 crc kubenswrapper[4909]: I1126 08:25:08.504300 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:25:08 crc kubenswrapper[4909]: E1126 08:25:08.505189 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:25:09 crc kubenswrapper[4909]: I1126 08:25:09.716429 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wl8kg"] Nov 26 08:25:09 crc kubenswrapper[4909]: I1126 08:25:09.718952 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:09 crc kubenswrapper[4909]: I1126 08:25:09.743211 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl8kg"] Nov 26 08:25:09 crc kubenswrapper[4909]: I1126 08:25:09.901753 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-utilities\") pod \"redhat-marketplace-wl8kg\" (UID: \"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:09 crc kubenswrapper[4909]: I1126 08:25:09.902187 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-catalog-content\") pod \"redhat-marketplace-wl8kg\" (UID: \"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:09 crc kubenswrapper[4909]: I1126 08:25:09.902252 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnz4n\" (UniqueName: \"kubernetes.io/projected/c6556ab7-af23-4490-8dc7-1601fe9254a9-kube-api-access-tnz4n\") pod \"redhat-marketplace-wl8kg\" (UID: \"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.003792 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnz4n\" (UniqueName: \"kubernetes.io/projected/c6556ab7-af23-4490-8dc7-1601fe9254a9-kube-api-access-tnz4n\") pod \"redhat-marketplace-wl8kg\" (UID: 
\"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.003879 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-utilities\") pod \"redhat-marketplace-wl8kg\" (UID: \"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.003938 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-catalog-content\") pod \"redhat-marketplace-wl8kg\" (UID: \"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.004341 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-catalog-content\") pod \"redhat-marketplace-wl8kg\" (UID: \"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.004682 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-utilities\") pod \"redhat-marketplace-wl8kg\" (UID: \"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.021142 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnz4n\" (UniqueName: \"kubernetes.io/projected/c6556ab7-af23-4490-8dc7-1601fe9254a9-kube-api-access-tnz4n\") pod \"redhat-marketplace-wl8kg\" (UID: \"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.054925 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.531280 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl8kg"] Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.788975 4909 generic.go:334] "Generic (PLEG): container finished" podID="c6556ab7-af23-4490-8dc7-1601fe9254a9" containerID="328cdd43b044b3fd571551ad6fcb1303ab76bd961af51eec74f1e64f4ab31b06" exitCode=0 Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.789023 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl8kg" event={"ID":"c6556ab7-af23-4490-8dc7-1601fe9254a9","Type":"ContainerDied","Data":"328cdd43b044b3fd571551ad6fcb1303ab76bd961af51eec74f1e64f4ab31b06"} Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.789051 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl8kg" event={"ID":"c6556ab7-af23-4490-8dc7-1601fe9254a9","Type":"ContainerStarted","Data":"cc1b23c12939f27c15a8bbf9ba3679680464a265e47f941f1dcb99165a77b2cc"} Nov 26 08:25:10 crc kubenswrapper[4909]: I1126 08:25:10.793932 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 08:25:11 crc kubenswrapper[4909]: I1126 08:25:11.798202 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl8kg" event={"ID":"c6556ab7-af23-4490-8dc7-1601fe9254a9","Type":"ContainerStarted","Data":"f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62"} Nov 26 08:25:12 crc kubenswrapper[4909]: I1126 08:25:12.809735 4909 generic.go:334] "Generic (PLEG): container finished" podID="c6556ab7-af23-4490-8dc7-1601fe9254a9" containerID="f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62" exitCode=0 Nov 26 08:25:12 crc kubenswrapper[4909]: I1126 08:25:12.809843 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl8kg" event={"ID":"c6556ab7-af23-4490-8dc7-1601fe9254a9","Type":"ContainerDied","Data":"f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62"} Nov 26 08:25:13 crc kubenswrapper[4909]: I1126 08:25:13.820575 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl8kg" event={"ID":"c6556ab7-af23-4490-8dc7-1601fe9254a9","Type":"ContainerStarted","Data":"8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620"} Nov 26 08:25:13 crc kubenswrapper[4909]: I1126 08:25:13.841730 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wl8kg" podStartSLOduration=2.404697094 podStartE2EDuration="4.841712926s" podCreationTimestamp="2025-11-26 08:25:09 +0000 UTC" firstStartedPulling="2025-11-26 08:25:10.791913503 +0000 UTC m=+5082.938124699" lastFinishedPulling="2025-11-26 08:25:13.228929355 +0000 UTC m=+5085.375140531" observedRunningTime="2025-11-26 08:25:13.837999145 +0000 UTC m=+5085.984210311" watchObservedRunningTime="2025-11-26 08:25:13.841712926 +0000 UTC m=+5085.987924092" Nov 26 08:25:20 crc kubenswrapper[4909]: I1126 08:25:20.055382 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:20 crc kubenswrapper[4909]: I1126 08:25:20.055994 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wl8kg"
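
The redhat-marketplace-wl8kg probe lines show startup-probe gating: at 08:25:20.055 the startup probe reports "unhealthy" and readiness is still empty; about 53ms later the startup probe passes ("started"), and only then can a readiness probe flip the pod to "ready" (08:25:20.935). Note also that this pod's podStartSLOduration (2.40s) is shorter than its podStartE2EDuration (4.84s) because here the image pull timestamps are real, and SLO duration excludes pull time. A small Go sketch of the gating rule, illustrative rather than kubelet's prober internals:

    package main

    import "fmt"

    // probeState sketches how a startup probe gates readiness, matching the
    // log: startup "unhealthy" -> "started" -> readiness "ready".
    type probeState struct{ started, ready bool }

    func (p *probeState) onStartupProbe(ok bool) {
    	if ok {
    		p.started = true
    	}
    }

    func (p *probeState) onReadinessProbe(ok bool) {
    	// Readiness results cannot make the pod ready until startup passes.
    	p.ready = p.started && ok
    }

    func main() {
    	var p probeState
    	p.onReadinessProbe(true) // probed before startup passes: still not ready
    	fmt.Println("ready:", p.ready)
    	p.onStartupProbe(false) // 08:25:20.055 status="unhealthy"
    	p.onStartupProbe(true)  // 08:25:20.108 status="started"
    	p.onReadinessProbe(true) // 08:25:20.935 status="ready"
    	fmt.Println("ready:", p.ready)
    }
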
Nov 26 08:25:20 crc kubenswrapper[4909]: I1126 08:25:20.108475 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:20 crc kubenswrapper[4909]: I1126 08:25:20.935725 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:21 crc kubenswrapper[4909]: I1126 08:25:21.010820 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl8kg"] Nov 26 08:25:21 crc kubenswrapper[4909]: I1126 08:25:21.498863 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:25:21 crc kubenswrapper[4909]: E1126 08:25:21.499176 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:25:22 crc kubenswrapper[4909]: I1126 08:25:22.903540 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wl8kg" podUID="c6556ab7-af23-4490-8dc7-1601fe9254a9" containerName="registry-server" containerID="cri-o://8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620" gracePeriod=2 Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.358806 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.439388 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnz4n\" (UniqueName: \"kubernetes.io/projected/c6556ab7-af23-4490-8dc7-1601fe9254a9-kube-api-access-tnz4n\") pod \"c6556ab7-af23-4490-8dc7-1601fe9254a9\" (UID: \"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.439686 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-utilities\") pod \"c6556ab7-af23-4490-8dc7-1601fe9254a9\" (UID: \"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.439880 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-catalog-content\") pod \"c6556ab7-af23-4490-8dc7-1601fe9254a9\" (UID: \"c6556ab7-af23-4490-8dc7-1601fe9254a9\") " Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.440617 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-utilities" (OuterVolumeSpecName: "utilities") pod "c6556ab7-af23-4490-8dc7-1601fe9254a9" (UID: "c6556ab7-af23-4490-8dc7-1601fe9254a9"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.445098 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6556ab7-af23-4490-8dc7-1601fe9254a9-kube-api-access-tnz4n" (OuterVolumeSpecName: "kube-api-access-tnz4n") pod "c6556ab7-af23-4490-8dc7-1601fe9254a9" (UID: "c6556ab7-af23-4490-8dc7-1601fe9254a9"). InnerVolumeSpecName "kube-api-access-tnz4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.456913 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6556ab7-af23-4490-8dc7-1601fe9254a9" (UID: "c6556ab7-af23-4490-8dc7-1601fe9254a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.542429 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.542460 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnz4n\" (UniqueName: \"kubernetes.io/projected/c6556ab7-af23-4490-8dc7-1601fe9254a9-kube-api-access-tnz4n\") on node \"crc\" DevicePath \"\"" Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.542470 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6556ab7-af23-4490-8dc7-1601fe9254a9-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.918198 4909 generic.go:334] "Generic (PLEG): container finished" podID="c6556ab7-af23-4490-8dc7-1601fe9254a9" containerID="8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620" exitCode=0 Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.918272 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl8kg" event={"ID":"c6556ab7-af23-4490-8dc7-1601fe9254a9","Type":"ContainerDied","Data":"8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620"} Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.918289 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl8kg" Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.918328 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl8kg" event={"ID":"c6556ab7-af23-4490-8dc7-1601fe9254a9","Type":"ContainerDied","Data":"cc1b23c12939f27c15a8bbf9ba3679680464a265e47f941f1dcb99165a77b2cc"} Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.918354 4909 scope.go:117] "RemoveContainer" containerID="8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620" Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.938037 4909 scope.go:117] "RemoveContainer" containerID="f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62" Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.953324 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl8kg"] Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.962812 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl8kg"] Nov 26 08:25:23 crc kubenswrapper[4909]: I1126 08:25:23.974905 4909 scope.go:117] "RemoveContainer" containerID="328cdd43b044b3fd571551ad6fcb1303ab76bd961af51eec74f1e64f4ab31b06" Nov 26 08:25:24 crc kubenswrapper[4909]: I1126 08:25:24.011533 4909 scope.go:117] "RemoveContainer" containerID="8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620" Nov 26 08:25:24 crc kubenswrapper[4909]: E1126 08:25:24.012181 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620\": container with ID starting with 8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620 not found: ID does not exist" containerID="8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620" Nov 26 08:25:24 crc kubenswrapper[4909]: I1126 08:25:24.012220 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620"} err="failed to get container status \"8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620\": rpc error: code = NotFound desc = could not find container \"8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620\": container with ID starting with 8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620 not found: ID does not exist" Nov 26 08:25:24 crc kubenswrapper[4909]: I1126 08:25:24.012244 4909 scope.go:117] "RemoveContainer" containerID="f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62" Nov 26 08:25:24 crc kubenswrapper[4909]: E1126 08:25:24.012676 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62\": container with ID starting with f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62 not found: ID does not exist" containerID="f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62" Nov 26 08:25:24 crc kubenswrapper[4909]: I1126 08:25:24.012763 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62"} err="failed to get container status \"f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62\": rpc error: code = NotFound desc = could not find container \"f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62\": container with ID starting with f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62 not found: ID does not exist"
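
The NotFound errors here are expected, not failures: "RemoveContainer" runs after the containers were already removed along with the sandbox, so ContainerStatus comes back with gRPC code = NotFound and kubelet just logs "DeleteContainer returned error" and moves on. The cleanup is idempotent; the desired state (container absent) already holds. A sketch of that tolerance pattern in Go, with a sentinel error standing in for the CRI NotFound response:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // errNotFound stands in for a CRI "rpc error: code = NotFound" response.
    var errNotFound = errors.New("rpc error: code = NotFound")

    // removeContainer treats NotFound from the runtime as success: the
    // container is already gone, so there is nothing left to remove.
    func removeContainer(id string, statusFn func(string) error) error {
    	if err := statusFn(id); err != nil {
    		if errors.Is(err, errNotFound) {
    			fmt.Printf("container %s already gone; nothing to do\n", id[:8])
    			return nil // idempotent: desired state (absent) already holds
    		}
    		return err // any other error is a real failure
    	}
    	fmt.Printf("removing container %s\n", id[:8])
    	return nil
    }

    func main() {
    	alreadyGone := func(string) error { return errNotFound }
    	_ = removeContainer("8ee86878a71c3b7827e168f372f59099c9d345e16f386d9317cf38e16b089620", alreadyGone)
    }
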
container \"f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62\": container with ID starting with f806a71aae923ca65550f7d1270b2abbbc8f61d861bca33e9dfa06ac2805dc62 not found: ID does not exist" Nov 26 08:25:24 crc kubenswrapper[4909]: I1126 08:25:24.012820 4909 scope.go:117] "RemoveContainer" containerID="328cdd43b044b3fd571551ad6fcb1303ab76bd961af51eec74f1e64f4ab31b06" Nov 26 08:25:24 crc kubenswrapper[4909]: E1126 08:25:24.013205 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"328cdd43b044b3fd571551ad6fcb1303ab76bd961af51eec74f1e64f4ab31b06\": container with ID starting with 328cdd43b044b3fd571551ad6fcb1303ab76bd961af51eec74f1e64f4ab31b06 not found: ID does not exist" containerID="328cdd43b044b3fd571551ad6fcb1303ab76bd961af51eec74f1e64f4ab31b06" Nov 26 08:25:24 crc kubenswrapper[4909]: I1126 08:25:24.013249 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"328cdd43b044b3fd571551ad6fcb1303ab76bd961af51eec74f1e64f4ab31b06"} err="failed to get container status \"328cdd43b044b3fd571551ad6fcb1303ab76bd961af51eec74f1e64f4ab31b06\": rpc error: code = NotFound desc = could not find container \"328cdd43b044b3fd571551ad6fcb1303ab76bd961af51eec74f1e64f4ab31b06\": container with ID starting with 328cdd43b044b3fd571551ad6fcb1303ab76bd961af51eec74f1e64f4ab31b06 not found: ID does not exist" Nov 26 08:25:24 crc kubenswrapper[4909]: I1126 08:25:24.510443 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6556ab7-af23-4490-8dc7-1601fe9254a9" path="/var/lib/kubelet/pods/c6556ab7-af23-4490-8dc7-1601fe9254a9/volumes" Nov 26 08:25:36 crc kubenswrapper[4909]: I1126 08:25:36.498844 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:25:36 crc kubenswrapper[4909]: E1126 08:25:36.499540 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:25:47 crc kubenswrapper[4909]: I1126 08:25:47.498991 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:25:47 crc kubenswrapper[4909]: E1126 08:25:47.499859 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:25:49 crc kubenswrapper[4909]: I1126 08:25:49.757944 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-88nfn"] Nov 26 08:25:49 crc kubenswrapper[4909]: E1126 08:25:49.758753 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6556ab7-af23-4490-8dc7-1601fe9254a9" containerName="extract-content" Nov 26 08:25:49 crc kubenswrapper[4909]: I1126 08:25:49.758770 4909 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c6556ab7-af23-4490-8dc7-1601fe9254a9" containerName="extract-content" Nov 26 08:25:49 crc kubenswrapper[4909]: E1126 08:25:49.758800 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6556ab7-af23-4490-8dc7-1601fe9254a9" containerName="extract-utilities" Nov 26 08:25:49 crc kubenswrapper[4909]: I1126 08:25:49.758809 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6556ab7-af23-4490-8dc7-1601fe9254a9" containerName="extract-utilities" Nov 26 08:25:49 crc kubenswrapper[4909]: E1126 08:25:49.758823 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6556ab7-af23-4490-8dc7-1601fe9254a9" containerName="registry-server" Nov 26 08:25:49 crc kubenswrapper[4909]: I1126 08:25:49.758855 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6556ab7-af23-4490-8dc7-1601fe9254a9" containerName="registry-server" Nov 26 08:25:49 crc kubenswrapper[4909]: I1126 08:25:49.759079 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6556ab7-af23-4490-8dc7-1601fe9254a9" containerName="registry-server" Nov 26 08:25:49 crc kubenswrapper[4909]: I1126 08:25:49.759778 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-88nfn" Nov 26 08:25:49 crc kubenswrapper[4909]: I1126 08:25:49.769985 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-88nfn"] Nov 26 08:25:49 crc kubenswrapper[4909]: I1126 08:25:49.869579 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4b5c\" (UniqueName: \"kubernetes.io/projected/0bb6fd40-d61c-4543-b3e3-4fb5507994eb-kube-api-access-p4b5c\") pod \"barbican-db-create-88nfn\" (UID: \"0bb6fd40-d61c-4543-b3e3-4fb5507994eb\") " pod="openstack/barbican-db-create-88nfn" Nov 26 08:25:49 crc kubenswrapper[4909]: I1126 08:25:49.971090 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4b5c\" (UniqueName: \"kubernetes.io/projected/0bb6fd40-d61c-4543-b3e3-4fb5507994eb-kube-api-access-p4b5c\") pod \"barbican-db-create-88nfn\" (UID: \"0bb6fd40-d61c-4543-b3e3-4fb5507994eb\") " pod="openstack/barbican-db-create-88nfn" Nov 26 08:25:49 crc kubenswrapper[4909]: I1126 08:25:49.989256 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4b5c\" (UniqueName: \"kubernetes.io/projected/0bb6fd40-d61c-4543-b3e3-4fb5507994eb-kube-api-access-p4b5c\") pod \"barbican-db-create-88nfn\" (UID: \"0bb6fd40-d61c-4543-b3e3-4fb5507994eb\") " pod="openstack/barbican-db-create-88nfn" Nov 26 08:25:50 crc kubenswrapper[4909]: I1126 08:25:50.084242 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-88nfn" Nov 26 08:25:50 crc kubenswrapper[4909]: I1126 08:25:50.606279 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-88nfn"] Nov 26 08:25:51 crc kubenswrapper[4909]: I1126 08:25:51.178377 4909 generic.go:334] "Generic (PLEG): container finished" podID="0bb6fd40-d61c-4543-b3e3-4fb5507994eb" containerID="0d337a119107f1e36ca3cb54330ad8aa552457aa2ed4a7de176794867acc79a0" exitCode=0 Nov 26 08:25:51 crc kubenswrapper[4909]: I1126 08:25:51.178646 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-88nfn" event={"ID":"0bb6fd40-d61c-4543-b3e3-4fb5507994eb","Type":"ContainerDied","Data":"0d337a119107f1e36ca3cb54330ad8aa552457aa2ed4a7de176794867acc79a0"} Nov 26 08:25:51 crc kubenswrapper[4909]: I1126 08:25:51.178674 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-88nfn" event={"ID":"0bb6fd40-d61c-4543-b3e3-4fb5507994eb","Type":"ContainerStarted","Data":"43740ce1e1c1cac94336fea79035a4097011c41f75844dfd059459630856f720"} Nov 26 08:25:52 crc kubenswrapper[4909]: I1126 08:25:52.631839 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-88nfn" Nov 26 08:25:52 crc kubenswrapper[4909]: I1126 08:25:52.753968 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4b5c\" (UniqueName: \"kubernetes.io/projected/0bb6fd40-d61c-4543-b3e3-4fb5507994eb-kube-api-access-p4b5c\") pod \"0bb6fd40-d61c-4543-b3e3-4fb5507994eb\" (UID: \"0bb6fd40-d61c-4543-b3e3-4fb5507994eb\") " Nov 26 08:25:52 crc kubenswrapper[4909]: I1126 08:25:52.759791 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb6fd40-d61c-4543-b3e3-4fb5507994eb-kube-api-access-p4b5c" (OuterVolumeSpecName: "kube-api-access-p4b5c") pod "0bb6fd40-d61c-4543-b3e3-4fb5507994eb" (UID: "0bb6fd40-d61c-4543-b3e3-4fb5507994eb"). InnerVolumeSpecName "kube-api-access-p4b5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:25:52 crc kubenswrapper[4909]: I1126 08:25:52.857191 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4b5c\" (UniqueName: \"kubernetes.io/projected/0bb6fd40-d61c-4543-b3e3-4fb5507994eb-kube-api-access-p4b5c\") on node \"crc\" DevicePath \"\"" Nov 26 08:25:53 crc kubenswrapper[4909]: I1126 08:25:53.201289 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-88nfn" event={"ID":"0bb6fd40-d61c-4543-b3e3-4fb5507994eb","Type":"ContainerDied","Data":"43740ce1e1c1cac94336fea79035a4097011c41f75844dfd059459630856f720"} Nov 26 08:25:53 crc kubenswrapper[4909]: I1126 08:25:53.201709 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43740ce1e1c1cac94336fea79035a4097011c41f75844dfd059459630856f720" Nov 26 08:25:53 crc kubenswrapper[4909]: I1126 08:25:53.201386 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-88nfn" Nov 26 08:25:59 crc kubenswrapper[4909]: I1126 08:25:59.716376 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d922-account-create-hcgjn"] Nov 26 08:25:59 crc kubenswrapper[4909]: E1126 08:25:59.717263 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb6fd40-d61c-4543-b3e3-4fb5507994eb" containerName="mariadb-database-create" Nov 26 08:25:59 crc kubenswrapper[4909]: I1126 08:25:59.717280 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb6fd40-d61c-4543-b3e3-4fb5507994eb" containerName="mariadb-database-create" Nov 26 08:25:59 crc kubenswrapper[4909]: I1126 08:25:59.717498 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb6fd40-d61c-4543-b3e3-4fb5507994eb" containerName="mariadb-database-create" Nov 26 08:25:59 crc kubenswrapper[4909]: I1126 08:25:59.718209 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d922-account-create-hcgjn" Nov 26 08:25:59 crc kubenswrapper[4909]: I1126 08:25:59.721109 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 26 08:25:59 crc kubenswrapper[4909]: I1126 08:25:59.729501 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d922-account-create-hcgjn"] Nov 26 08:25:59 crc kubenswrapper[4909]: I1126 08:25:59.772274 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2xt7\" (UniqueName: \"kubernetes.io/projected/5d67eccc-f207-4b2f-921c-dbf28a5438a9-kube-api-access-f2xt7\") pod \"barbican-d922-account-create-hcgjn\" (UID: \"5d67eccc-f207-4b2f-921c-dbf28a5438a9\") " pod="openstack/barbican-d922-account-create-hcgjn" Nov 26 08:25:59 crc kubenswrapper[4909]: I1126 08:25:59.874874 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2xt7\" (UniqueName: \"kubernetes.io/projected/5d67eccc-f207-4b2f-921c-dbf28a5438a9-kube-api-access-f2xt7\") pod \"barbican-d922-account-create-hcgjn\" (UID: \"5d67eccc-f207-4b2f-921c-dbf28a5438a9\") " pod="openstack/barbican-d922-account-create-hcgjn" Nov 26 08:25:59 crc kubenswrapper[4909]: I1126 08:25:59.902762 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2xt7\" (UniqueName: \"kubernetes.io/projected/5d67eccc-f207-4b2f-921c-dbf28a5438a9-kube-api-access-f2xt7\") pod \"barbican-d922-account-create-hcgjn\" (UID: \"5d67eccc-f207-4b2f-921c-dbf28a5438a9\") " pod="openstack/barbican-d922-account-create-hcgjn" Nov 26 08:26:00 crc kubenswrapper[4909]: I1126 08:26:00.090240 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d922-account-create-hcgjn" Nov 26 08:26:00 crc kubenswrapper[4909]: I1126 08:26:00.499086 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:26:00 crc kubenswrapper[4909]: E1126 08:26:00.499736 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:26:00 crc kubenswrapper[4909]: I1126 08:26:00.592952 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d922-account-create-hcgjn"] Nov 26 08:26:00 crc kubenswrapper[4909]: W1126 08:26:00.595925 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d67eccc_f207_4b2f_921c_dbf28a5438a9.slice/crio-f5413f01bce4807a47c42683db3dc28f1e8005a0bd9ba8965d8c62cec95f407f WatchSource:0}: Error finding container f5413f01bce4807a47c42683db3dc28f1e8005a0bd9ba8965d8c62cec95f407f: Status 404 returned error can't find the container with id f5413f01bce4807a47c42683db3dc28f1e8005a0bd9ba8965d8c62cec95f407f Nov 26 08:26:01 crc kubenswrapper[4909]: I1126 08:26:01.279856 4909 generic.go:334] "Generic (PLEG): container finished" podID="5d67eccc-f207-4b2f-921c-dbf28a5438a9" containerID="9c33b6d857a92f02b22caf20289afe3dddcd65b2a24aa320d70df36f1eadd2c3" exitCode=0 Nov 26 08:26:01 crc kubenswrapper[4909]: I1126 08:26:01.279935 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d922-account-create-hcgjn" event={"ID":"5d67eccc-f207-4b2f-921c-dbf28a5438a9","Type":"ContainerDied","Data":"9c33b6d857a92f02b22caf20289afe3dddcd65b2a24aa320d70df36f1eadd2c3"} Nov 26 08:26:01 crc kubenswrapper[4909]: I1126 08:26:01.280244 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d922-account-create-hcgjn" event={"ID":"5d67eccc-f207-4b2f-921c-dbf28a5438a9","Type":"ContainerStarted","Data":"f5413f01bce4807a47c42683db3dc28f1e8005a0bd9ba8965d8c62cec95f407f"} Nov 26 08:26:02 crc kubenswrapper[4909]: I1126 08:26:02.612290 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d922-account-create-hcgjn" Nov 26 08:26:02 crc kubenswrapper[4909]: I1126 08:26:02.726723 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2xt7\" (UniqueName: \"kubernetes.io/projected/5d67eccc-f207-4b2f-921c-dbf28a5438a9-kube-api-access-f2xt7\") pod \"5d67eccc-f207-4b2f-921c-dbf28a5438a9\" (UID: \"5d67eccc-f207-4b2f-921c-dbf28a5438a9\") " Nov 26 08:26:02 crc kubenswrapper[4909]: I1126 08:26:02.736101 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d67eccc-f207-4b2f-921c-dbf28a5438a9-kube-api-access-f2xt7" (OuterVolumeSpecName: "kube-api-access-f2xt7") pod "5d67eccc-f207-4b2f-921c-dbf28a5438a9" (UID: "5d67eccc-f207-4b2f-921c-dbf28a5438a9"). InnerVolumeSpecName "kube-api-access-f2xt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:26:02 crc kubenswrapper[4909]: I1126 08:26:02.828485 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2xt7\" (UniqueName: \"kubernetes.io/projected/5d67eccc-f207-4b2f-921c-dbf28a5438a9-kube-api-access-f2xt7\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:03 crc kubenswrapper[4909]: I1126 08:26:03.301372 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d922-account-create-hcgjn" event={"ID":"5d67eccc-f207-4b2f-921c-dbf28a5438a9","Type":"ContainerDied","Data":"f5413f01bce4807a47c42683db3dc28f1e8005a0bd9ba8965d8c62cec95f407f"} Nov 26 08:26:03 crc kubenswrapper[4909]: I1126 08:26:03.301739 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5413f01bce4807a47c42683db3dc28f1e8005a0bd9ba8965d8c62cec95f407f" Nov 26 08:26:03 crc kubenswrapper[4909]: I1126 08:26:03.301479 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d922-account-create-hcgjn" Nov 26 08:26:04 crc kubenswrapper[4909]: I1126 08:26:04.882043 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-g6wdv"] Nov 26 08:26:04 crc kubenswrapper[4909]: E1126 08:26:04.882448 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d67eccc-f207-4b2f-921c-dbf28a5438a9" containerName="mariadb-account-create" Nov 26 08:26:04 crc kubenswrapper[4909]: I1126 08:26:04.882463 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d67eccc-f207-4b2f-921c-dbf28a5438a9" containerName="mariadb-account-create" Nov 26 08:26:04 crc kubenswrapper[4909]: I1126 08:26:04.882680 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d67eccc-f207-4b2f-921c-dbf28a5438a9" containerName="mariadb-account-create" Nov 26 08:26:04 crc kubenswrapper[4909]: I1126 08:26:04.883382 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:04 crc kubenswrapper[4909]: I1126 08:26:04.886714 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lktdx" Nov 26 08:26:04 crc kubenswrapper[4909]: I1126 08:26:04.886977 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 26 08:26:04 crc kubenswrapper[4909]: I1126 08:26:04.891474 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g6wdv"] Nov 26 08:26:05 crc kubenswrapper[4909]: I1126 08:26:05.064774 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-db-sync-config-data\") pod \"barbican-db-sync-g6wdv\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:05 crc kubenswrapper[4909]: I1126 08:26:05.064907 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jscsv\" (UniqueName: \"kubernetes.io/projected/0a1e5822-f048-404a-ae86-c9ad6248f715-kube-api-access-jscsv\") pod \"barbican-db-sync-g6wdv\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:05 crc kubenswrapper[4909]: I1126 08:26:05.064963 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-combined-ca-bundle\") pod \"barbican-db-sync-g6wdv\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:05 crc kubenswrapper[4909]: I1126 08:26:05.166098 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-db-sync-config-data\") pod \"barbican-db-sync-g6wdv\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:05 crc kubenswrapper[4909]: I1126 08:26:05.166207 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jscsv\" (UniqueName: \"kubernetes.io/projected/0a1e5822-f048-404a-ae86-c9ad6248f715-kube-api-access-jscsv\") pod \"barbican-db-sync-g6wdv\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:05 crc kubenswrapper[4909]: I1126 08:26:05.166242 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-combined-ca-bundle\") pod \"barbican-db-sync-g6wdv\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:05 crc kubenswrapper[4909]: I1126 08:26:05.171202 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-db-sync-config-data\") pod \"barbican-db-sync-g6wdv\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:05 crc kubenswrapper[4909]: I1126 08:26:05.171210 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-combined-ca-bundle\") pod \"barbican-db-sync-g6wdv\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:05 crc kubenswrapper[4909]: I1126 08:26:05.186084 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jscsv\" (UniqueName: \"kubernetes.io/projected/0a1e5822-f048-404a-ae86-c9ad6248f715-kube-api-access-jscsv\") pod \"barbican-db-sync-g6wdv\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:05 crc kubenswrapper[4909]: I1126 08:26:05.201402 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:05 crc kubenswrapper[4909]: I1126 08:26:05.661902 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g6wdv"] Nov 26 08:26:05 crc kubenswrapper[4909]: W1126 08:26:05.680347 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a1e5822_f048_404a_ae86_c9ad6248f715.slice/crio-0315f734de1bca01533380cf6392fdfbc427f40cd721ba7f71f0d82d466ccf9a WatchSource:0}: Error finding container 0315f734de1bca01533380cf6392fdfbc427f40cd721ba7f71f0d82d466ccf9a: Status 404 returned error can't find the container with id 0315f734de1bca01533380cf6392fdfbc427f40cd721ba7f71f0d82d466ccf9a Nov 26 08:26:06 crc kubenswrapper[4909]: I1126 08:26:06.355486 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6wdv" event={"ID":"0a1e5822-f048-404a-ae86-c9ad6248f715","Type":"ContainerStarted","Data":"2e13ca9a6527d296d8abe25a43a72c7f7533c73dd63d17f8d4d0b4e3847efcca"} Nov 26 08:26:06 crc kubenswrapper[4909]: I1126 08:26:06.355885 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6wdv" event={"ID":"0a1e5822-f048-404a-ae86-c9ad6248f715","Type":"ContainerStarted","Data":"0315f734de1bca01533380cf6392fdfbc427f40cd721ba7f71f0d82d466ccf9a"} Nov 26 08:26:06 crc kubenswrapper[4909]: I1126 08:26:06.378192 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-g6wdv" podStartSLOduration=2.378173096 podStartE2EDuration="2.378173096s" podCreationTimestamp="2025-11-26 08:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:26:06.370759095 +0000 UTC m=+5138.516970261" watchObservedRunningTime="2025-11-26 08:26:06.378173096 +0000 UTC m=+5138.524384262" Nov 26 08:26:07 crc kubenswrapper[4909]: I1126 08:26:07.366090 4909 generic.go:334] "Generic (PLEG): container finished" podID="0a1e5822-f048-404a-ae86-c9ad6248f715" containerID="2e13ca9a6527d296d8abe25a43a72c7f7533c73dd63d17f8d4d0b4e3847efcca" exitCode=0 Nov 26 08:26:07 crc kubenswrapper[4909]: I1126 08:26:07.366143 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6wdv" event={"ID":"0a1e5822-f048-404a-ae86-c9ad6248f715","Type":"ContainerDied","Data":"2e13ca9a6527d296d8abe25a43a72c7f7533c73dd63d17f8d4d0b4e3847efcca"} Nov 26 08:26:08 crc kubenswrapper[4909]: I1126 08:26:08.729040 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:08 crc kubenswrapper[4909]: I1126 08:26:08.827170 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jscsv\" (UniqueName: \"kubernetes.io/projected/0a1e5822-f048-404a-ae86-c9ad6248f715-kube-api-access-jscsv\") pod \"0a1e5822-f048-404a-ae86-c9ad6248f715\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " Nov 26 08:26:08 crc kubenswrapper[4909]: I1126 08:26:08.827333 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-combined-ca-bundle\") pod \"0a1e5822-f048-404a-ae86-c9ad6248f715\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " Nov 26 08:26:08 crc kubenswrapper[4909]: I1126 08:26:08.827377 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-db-sync-config-data\") pod \"0a1e5822-f048-404a-ae86-c9ad6248f715\" (UID: \"0a1e5822-f048-404a-ae86-c9ad6248f715\") " Nov 26 08:26:08 crc kubenswrapper[4909]: I1126 08:26:08.833767 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a1e5822-f048-404a-ae86-c9ad6248f715-kube-api-access-jscsv" (OuterVolumeSpecName: "kube-api-access-jscsv") pod "0a1e5822-f048-404a-ae86-c9ad6248f715" (UID: "0a1e5822-f048-404a-ae86-c9ad6248f715"). InnerVolumeSpecName "kube-api-access-jscsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:26:08 crc kubenswrapper[4909]: I1126 08:26:08.833833 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0a1e5822-f048-404a-ae86-c9ad6248f715" (UID: "0a1e5822-f048-404a-ae86-c9ad6248f715"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:26:08 crc kubenswrapper[4909]: I1126 08:26:08.854070 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a1e5822-f048-404a-ae86-c9ad6248f715" (UID: "0a1e5822-f048-404a-ae86-c9ad6248f715"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:26:08 crc kubenswrapper[4909]: I1126 08:26:08.930085 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jscsv\" (UniqueName: \"kubernetes.io/projected/0a1e5822-f048-404a-ae86-c9ad6248f715-kube-api-access-jscsv\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:08 crc kubenswrapper[4909]: I1126 08:26:08.930150 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:08 crc kubenswrapper[4909]: I1126 08:26:08.930178 4909 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a1e5822-f048-404a-ae86-c9ad6248f715-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.387838 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6wdv" event={"ID":"0a1e5822-f048-404a-ae86-c9ad6248f715","Type":"ContainerDied","Data":"0315f734de1bca01533380cf6392fdfbc427f40cd721ba7f71f0d82d466ccf9a"} Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.387953 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0315f734de1bca01533380cf6392fdfbc427f40cd721ba7f71f0d82d466ccf9a" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.387909 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g6wdv" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.640785 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-668bb44595-lkzgp"] Nov 26 08:26:09 crc kubenswrapper[4909]: E1126 08:26:09.641135 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a1e5822-f048-404a-ae86-c9ad6248f715" containerName="barbican-db-sync" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.641154 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a1e5822-f048-404a-ae86-c9ad6248f715" containerName="barbican-db-sync" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.641339 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a1e5822-f048-404a-ae86-c9ad6248f715" containerName="barbican-db-sync" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.642269 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.650162 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.650418 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.650660 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lktdx" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.700301 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-66ffdb4466-s6kpl"] Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.702034 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.706519 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.717705 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-66ffdb4466-s6kpl"] Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.728699 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-668bb44595-lkzgp"] Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.741651 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869545f9c9-j228l"] Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.743536 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.748711 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aabbbfa5-7718-49d1-82ae-7b79cd170efb-config-data-custom\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.748780 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabbbfa5-7718-49d1-82ae-7b79cd170efb-combined-ca-bundle\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.748837 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z6h8\" (UniqueName: \"kubernetes.io/projected/aabbbfa5-7718-49d1-82ae-7b79cd170efb-kube-api-access-5z6h8\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.748861 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabbbfa5-7718-49d1-82ae-7b79cd170efb-logs\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.748903 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabbbfa5-7718-49d1-82ae-7b79cd170efb-config-data\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.780220 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869545f9c9-j228l"] Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.850313 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-config\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " 
pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.850366 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-sb\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.850392 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv4r6\" (UniqueName: \"kubernetes.io/projected/b903a0f7-c1a1-43fb-abb8-bb7d83239317-kube-api-access-xv4r6\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.850689 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aabbbfa5-7718-49d1-82ae-7b79cd170efb-config-data-custom\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.850821 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabbbfa5-7718-49d1-82ae-7b79cd170efb-combined-ca-bundle\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.850870 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-nb\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.850960 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z6h8\" (UniqueName: \"kubernetes.io/projected/aabbbfa5-7718-49d1-82ae-7b79cd170efb-kube-api-access-5z6h8\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.850988 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabbbfa5-7718-49d1-82ae-7b79cd170efb-logs\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.851017 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b903a0f7-c1a1-43fb-abb8-bb7d83239317-combined-ca-bundle\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.851050 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b903a0f7-c1a1-43fb-abb8-bb7d83239317-config-data\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.851084 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwssj\" (UniqueName: \"kubernetes.io/projected/712a969d-2f65-4c0b-8550-913402cdee55-kube-api-access-fwssj\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.851110 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-dns-svc\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.851133 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b903a0f7-c1a1-43fb-abb8-bb7d83239317-config-data-custom\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.851158 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabbbfa5-7718-49d1-82ae-7b79cd170efb-config-data\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.851183 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b903a0f7-c1a1-43fb-abb8-bb7d83239317-logs\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.851554 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabbbfa5-7718-49d1-82ae-7b79cd170efb-logs\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.855031 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aabbbfa5-7718-49d1-82ae-7b79cd170efb-config-data-custom\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.855805 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabbbfa5-7718-49d1-82ae-7b79cd170efb-combined-ca-bundle\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.870289 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5z6h8\" (UniqueName: \"kubernetes.io/projected/aabbbfa5-7718-49d1-82ae-7b79cd170efb-kube-api-access-5z6h8\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.879145 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabbbfa5-7718-49d1-82ae-7b79cd170efb-config-data\") pod \"barbican-worker-668bb44595-lkzgp\" (UID: \"aabbbfa5-7718-49d1-82ae-7b79cd170efb\") " pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.918508 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-ff6d88966-pkkdc"] Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.920136 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.922126 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.933515 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-ff6d88966-pkkdc"] Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.952885 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-nb\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.952956 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b903a0f7-c1a1-43fb-abb8-bb7d83239317-combined-ca-bundle\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.952977 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b903a0f7-c1a1-43fb-abb8-bb7d83239317-config-data\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.953003 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwssj\" (UniqueName: \"kubernetes.io/projected/712a969d-2f65-4c0b-8550-913402cdee55-kube-api-access-fwssj\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.953022 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-dns-svc\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.953039 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b903a0f7-c1a1-43fb-abb8-bb7d83239317-config-data-custom\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.953065 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b903a0f7-c1a1-43fb-abb8-bb7d83239317-logs\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.953113 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-config\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.953136 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-sb\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.953153 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv4r6\" (UniqueName: \"kubernetes.io/projected/b903a0f7-c1a1-43fb-abb8-bb7d83239317-kube-api-access-xv4r6\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.954331 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-nb\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.954779 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-dns-svc\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.955564 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-config\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.955949 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b903a0f7-c1a1-43fb-abb8-bb7d83239317-logs\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.956270 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-sb\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.958000 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b903a0f7-c1a1-43fb-abb8-bb7d83239317-config-data\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.962780 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b903a0f7-c1a1-43fb-abb8-bb7d83239317-combined-ca-bundle\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.966161 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-668bb44595-lkzgp" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.973205 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv4r6\" (UniqueName: \"kubernetes.io/projected/b903a0f7-c1a1-43fb-abb8-bb7d83239317-kube-api-access-xv4r6\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.973377 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwssj\" (UniqueName: \"kubernetes.io/projected/712a969d-2f65-4c0b-8550-913402cdee55-kube-api-access-fwssj\") pod \"dnsmasq-dns-869545f9c9-j228l\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") " pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:09 crc kubenswrapper[4909]: I1126 08:26:09.977530 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b903a0f7-c1a1-43fb-abb8-bb7d83239317-config-data-custom\") pod \"barbican-keystone-listener-66ffdb4466-s6kpl\" (UID: \"b903a0f7-c1a1-43fb-abb8-bb7d83239317\") " pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.033528 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.055334 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2258ed3-c9bd-4150-a1fb-f26c31771be2-logs\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.055497 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2258ed3-c9bd-4150-a1fb-f26c31771be2-combined-ca-bundle\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.055584 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2258ed3-c9bd-4150-a1fb-f26c31771be2-config-data\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.055696 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2258ed3-c9bd-4150-a1fb-f26c31771be2-config-data-custom\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.055729 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhr6\" (UniqueName: \"kubernetes.io/projected/e2258ed3-c9bd-4150-a1fb-f26c31771be2-kube-api-access-kfhr6\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.084049 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.158290 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2258ed3-c9bd-4150-a1fb-f26c31771be2-config-data-custom\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.158336 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfhr6\" (UniqueName: \"kubernetes.io/projected/e2258ed3-c9bd-4150-a1fb-f26c31771be2-kube-api-access-kfhr6\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.158370 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2258ed3-c9bd-4150-a1fb-f26c31771be2-logs\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.158438 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2258ed3-c9bd-4150-a1fb-f26c31771be2-combined-ca-bundle\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.158480 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2258ed3-c9bd-4150-a1fb-f26c31771be2-config-data\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.160062 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2258ed3-c9bd-4150-a1fb-f26c31771be2-logs\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.178473 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2258ed3-c9bd-4150-a1fb-f26c31771be2-combined-ca-bundle\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.179136 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2258ed3-c9bd-4150-a1fb-f26c31771be2-config-data\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.182367 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2258ed3-c9bd-4150-a1fb-f26c31771be2-config-data-custom\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc 
kubenswrapper[4909]: I1126 08:26:10.187128 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfhr6\" (UniqueName: \"kubernetes.io/projected/e2258ed3-c9bd-4150-a1fb-f26c31771be2-kube-api-access-kfhr6\") pod \"barbican-api-ff6d88966-pkkdc\" (UID: \"e2258ed3-c9bd-4150-a1fb-f26c31771be2\") " pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.334220 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:10 crc kubenswrapper[4909]: W1126 08:26:10.505927 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaabbbfa5_7718_49d1_82ae_7b79cd170efb.slice/crio-c8426906826ae44536918d1710c271b33b767732c3f5d06ef4b7120296efb5f5 WatchSource:0}: Error finding container c8426906826ae44536918d1710c271b33b767732c3f5d06ef4b7120296efb5f5: Status 404 returned error can't find the container with id c8426906826ae44536918d1710c271b33b767732c3f5d06ef4b7120296efb5f5 Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.511093 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-668bb44595-lkzgp"] Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.643393 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-66ffdb4466-s6kpl"] Nov 26 08:26:10 crc kubenswrapper[4909]: W1126 08:26:10.644958 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb903a0f7_c1a1_43fb_abb8_bb7d83239317.slice/crio-1e1e7a71c95dda5d1f12828436aa4d2bd3baea6eaebf5d6c90e0d9825195de19 WatchSource:0}: Error finding container 1e1e7a71c95dda5d1f12828436aa4d2bd3baea6eaebf5d6c90e0d9825195de19: Status 404 returned error can't find the container with id 1e1e7a71c95dda5d1f12828436aa4d2bd3baea6eaebf5d6c90e0d9825195de19 Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.715947 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869545f9c9-j228l"] Nov 26 08:26:10 crc kubenswrapper[4909]: W1126 08:26:10.730936 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod712a969d_2f65_4c0b_8550_913402cdee55.slice/crio-a0f26cffd3c192db84a8d9adeae245f660a2157bdfde558325758662db4b1cab WatchSource:0}: Error finding container a0f26cffd3c192db84a8d9adeae245f660a2157bdfde558325758662db4b1cab: Status 404 returned error can't find the container with id a0f26cffd3c192db84a8d9adeae245f660a2157bdfde558325758662db4b1cab Nov 26 08:26:10 crc kubenswrapper[4909]: I1126 08:26:10.825473 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-ff6d88966-pkkdc"] Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.404115 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" event={"ID":"b903a0f7-c1a1-43fb-abb8-bb7d83239317","Type":"ContainerStarted","Data":"7c68dcb7a9801a85a6074a4cb7b48feb163a4c2c0982f97dc123b7155e5d7b1a"} Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.404408 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" event={"ID":"b903a0f7-c1a1-43fb-abb8-bb7d83239317","Type":"ContainerStarted","Data":"1769bbd51be9c77d2bcb0e9e9ab96982c7df92d3318b33e03ea6743f94b748e4"} Nov 26 08:26:11 crc 
kubenswrapper[4909]: I1126 08:26:11.404419 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" event={"ID":"b903a0f7-c1a1-43fb-abb8-bb7d83239317","Type":"ContainerStarted","Data":"1e1e7a71c95dda5d1f12828436aa4d2bd3baea6eaebf5d6c90e0d9825195de19"} Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.406794 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-668bb44595-lkzgp" event={"ID":"aabbbfa5-7718-49d1-82ae-7b79cd170efb","Type":"ContainerStarted","Data":"cb964d7aa2f465705bf3b2723883ccdefe2f5bdc67c3c1d08f38c2ede4390fc6"} Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.406849 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-668bb44595-lkzgp" event={"ID":"aabbbfa5-7718-49d1-82ae-7b79cd170efb","Type":"ContainerStarted","Data":"022dcbb6fe9f43436014f671ecccb8e0b9424ae6820828dbde0f4ae2674e3b0d"} Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.406863 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-668bb44595-lkzgp" event={"ID":"aabbbfa5-7718-49d1-82ae-7b79cd170efb","Type":"ContainerStarted","Data":"c8426906826ae44536918d1710c271b33b767732c3f5d06ef4b7120296efb5f5"} Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.408623 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ff6d88966-pkkdc" event={"ID":"e2258ed3-c9bd-4150-a1fb-f26c31771be2","Type":"ContainerStarted","Data":"fec6d54f53a8354447b5d4cb95f642c2550376671b0a1a2653f7a4ca23b7eecc"} Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.408667 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ff6d88966-pkkdc" event={"ID":"e2258ed3-c9bd-4150-a1fb-f26c31771be2","Type":"ContainerStarted","Data":"639c555ea810f17dfd06a8df7d2702bb9fcb3a48524c0e9f28275392ed4b79c5"} Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.408680 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ff6d88966-pkkdc" event={"ID":"e2258ed3-c9bd-4150-a1fb-f26c31771be2","Type":"ContainerStarted","Data":"2c0467833f1fb8d4e9e3c354f5a970c2ff5aade013bfffe84215f5cd64aca517"} Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.408719 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.408733 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.410226 4909 generic.go:334] "Generic (PLEG): container finished" podID="712a969d-2f65-4c0b-8550-913402cdee55" containerID="b1e21dfb82bce7be98f25b08d6064496858efad056f98ea1bb5345f43b8c1d62" exitCode=0 Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.410263 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869545f9c9-j228l" event={"ID":"712a969d-2f65-4c0b-8550-913402cdee55","Type":"ContainerDied","Data":"b1e21dfb82bce7be98f25b08d6064496858efad056f98ea1bb5345f43b8c1d62"} Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.410283 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869545f9c9-j228l" event={"ID":"712a969d-2f65-4c0b-8550-913402cdee55","Type":"ContainerStarted","Data":"a0f26cffd3c192db84a8d9adeae245f660a2157bdfde558325758662db4b1cab"} Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.413391 4909 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-hr6ck"] Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.415120 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.431456 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hr6ck"] Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.442772 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-66ffdb4466-s6kpl" podStartSLOduration=2.442753193 podStartE2EDuration="2.442753193s" podCreationTimestamp="2025-11-26 08:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:26:11.432040631 +0000 UTC m=+5143.578251807" watchObservedRunningTime="2025-11-26 08:26:11.442753193 +0000 UTC m=+5143.588964349" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.513106 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-ff6d88966-pkkdc" podStartSLOduration=2.513086414 podStartE2EDuration="2.513086414s" podCreationTimestamp="2025-11-26 08:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:26:11.511020747 +0000 UTC m=+5143.657231923" watchObservedRunningTime="2025-11-26 08:26:11.513086414 +0000 UTC m=+5143.659297580" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.536051 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-668bb44595-lkzgp" podStartSLOduration=2.53602424 podStartE2EDuration="2.53602424s" podCreationTimestamp="2025-11-26 08:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:26:11.528970038 +0000 UTC m=+5143.675181204" watchObservedRunningTime="2025-11-26 08:26:11.53602424 +0000 UTC m=+5143.682235406" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.590707 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-catalog-content\") pod \"community-operators-hr6ck\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.590790 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh6tx\" (UniqueName: \"kubernetes.io/projected/a7d88038-4a61-4f19-9054-4d018773f23b-kube-api-access-gh6tx\") pod \"community-operators-hr6ck\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.591134 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-utilities\") pod \"community-operators-hr6ck\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.692441 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-utilities\") pod \"community-operators-hr6ck\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.692510 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-catalog-content\") pod \"community-operators-hr6ck\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.692538 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh6tx\" (UniqueName: \"kubernetes.io/projected/a7d88038-4a61-4f19-9054-4d018773f23b-kube-api-access-gh6tx\") pod \"community-operators-hr6ck\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.694071 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-utilities\") pod \"community-operators-hr6ck\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.696136 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-catalog-content\") pod \"community-operators-hr6ck\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.725735 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh6tx\" (UniqueName: \"kubernetes.io/projected/a7d88038-4a61-4f19-9054-4d018773f23b-kube-api-access-gh6tx\") pod \"community-operators-hr6ck\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:11 crc kubenswrapper[4909]: I1126 08:26:11.758112 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:12 crc kubenswrapper[4909]: I1126 08:26:12.319895 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hr6ck"] Nov 26 08:26:12 crc kubenswrapper[4909]: W1126 08:26:12.328238 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7d88038_4a61_4f19_9054_4d018773f23b.slice/crio-9b4286af7542d2028ba8c70831a4cc3e9211ddc789e6c82127de7b3922bf69fc WatchSource:0}: Error finding container 9b4286af7542d2028ba8c70831a4cc3e9211ddc789e6c82127de7b3922bf69fc: Status 404 returned error can't find the container with id 9b4286af7542d2028ba8c70831a4cc3e9211ddc789e6c82127de7b3922bf69fc Nov 26 08:26:12 crc kubenswrapper[4909]: I1126 08:26:12.431059 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hr6ck" event={"ID":"a7d88038-4a61-4f19-9054-4d018773f23b","Type":"ContainerStarted","Data":"9b4286af7542d2028ba8c70831a4cc3e9211ddc789e6c82127de7b3922bf69fc"} Nov 26 08:26:12 crc kubenswrapper[4909]: I1126 08:26:12.434701 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869545f9c9-j228l" event={"ID":"712a969d-2f65-4c0b-8550-913402cdee55","Type":"ContainerStarted","Data":"19a5d6088d90508de72937a33388b08098db136345f6ccd9b15511bea65f4d5a"} Nov 26 08:26:12 crc kubenswrapper[4909]: I1126 08:26:12.456282 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869545f9c9-j228l" podStartSLOduration=3.456264787 podStartE2EDuration="3.456264787s" podCreationTimestamp="2025-11-26 08:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:26:12.451199109 +0000 UTC m=+5144.597410275" watchObservedRunningTime="2025-11-26 08:26:12.456264787 +0000 UTC m=+5144.602475953" Nov 26 08:26:13 crc kubenswrapper[4909]: I1126 08:26:13.447109 4909 generic.go:334] "Generic (PLEG): container finished" podID="a7d88038-4a61-4f19-9054-4d018773f23b" containerID="0acd389a8cf2ed45adf3e4103f71a54a2e22fa4fc0f27dcd9bd21207773e54a3" exitCode=0 Nov 26 08:26:13 crc kubenswrapper[4909]: I1126 08:26:13.447203 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hr6ck" event={"ID":"a7d88038-4a61-4f19-9054-4d018773f23b","Type":"ContainerDied","Data":"0acd389a8cf2ed45adf3e4103f71a54a2e22fa4fc0f27dcd9bd21207773e54a3"} Nov 26 08:26:13 crc kubenswrapper[4909]: I1126 08:26:13.447776 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:15 crc kubenswrapper[4909]: I1126 08:26:15.475280 4909 generic.go:334] "Generic (PLEG): container finished" podID="a7d88038-4a61-4f19-9054-4d018773f23b" containerID="c5a847a02ec845032205537ab273c22a88131643154c4e73e1bdca07d769cc03" exitCode=0 Nov 26 08:26:15 crc kubenswrapper[4909]: I1126 08:26:15.475397 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hr6ck" event={"ID":"a7d88038-4a61-4f19-9054-4d018773f23b","Type":"ContainerDied","Data":"c5a847a02ec845032205537ab273c22a88131643154c4e73e1bdca07d769cc03"} Nov 26 08:26:15 crc kubenswrapper[4909]: I1126 08:26:15.499171 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:26:15 crc kubenswrapper[4909]: 
E1126 08:26:15.499853 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:26:16 crc kubenswrapper[4909]: I1126 08:26:16.485831 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hr6ck" event={"ID":"a7d88038-4a61-4f19-9054-4d018773f23b","Type":"ContainerStarted","Data":"c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8"} Nov 26 08:26:16 crc kubenswrapper[4909]: I1126 08:26:16.514791 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hr6ck" podStartSLOduration=2.902546798 podStartE2EDuration="5.514768604s" podCreationTimestamp="2025-11-26 08:26:11 +0000 UTC" firstStartedPulling="2025-11-26 08:26:13.451014458 +0000 UTC m=+5145.597225624" lastFinishedPulling="2025-11-26 08:26:16.063236264 +0000 UTC m=+5148.209447430" observedRunningTime="2025-11-26 08:26:16.510580569 +0000 UTC m=+5148.656791735" watchObservedRunningTime="2025-11-26 08:26:16.514768604 +0000 UTC m=+5148.660979770" Nov 26 08:26:20 crc kubenswrapper[4909]: I1126 08:26:20.085828 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869545f9c9-j228l" Nov 26 08:26:20 crc kubenswrapper[4909]: I1126 08:26:20.161913 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7485969d9c-mvbhq"] Nov 26 08:26:20 crc kubenswrapper[4909]: I1126 08:26:20.162150 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" podUID="1b0c9935-9165-4a32-bd48-d4be99ebccbd" containerName="dnsmasq-dns" containerID="cri-o://6f7064a97e29ed3e4208101640c1c99e95f5bf2c3b9399fbe9983e7f26c21770" gracePeriod=10 Nov 26 08:26:20 crc kubenswrapper[4909]: I1126 08:26:20.529861 4909 generic.go:334] "Generic (PLEG): container finished" podID="1b0c9935-9165-4a32-bd48-d4be99ebccbd" containerID="6f7064a97e29ed3e4208101640c1c99e95f5bf2c3b9399fbe9983e7f26c21770" exitCode=0 Nov 26 08:26:20 crc kubenswrapper[4909]: I1126 08:26:20.529940 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" event={"ID":"1b0c9935-9165-4a32-bd48-d4be99ebccbd","Type":"ContainerDied","Data":"6f7064a97e29ed3e4208101640c1c99e95f5bf2c3b9399fbe9983e7f26c21770"} Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.150934 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.266851 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htd24\" (UniqueName: \"kubernetes.io/projected/1b0c9935-9165-4a32-bd48-d4be99ebccbd-kube-api-access-htd24\") pod \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.266936 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-dns-svc\") pod \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.266980 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-sb\") pod \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.267014 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-config\") pod \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.267048 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-nb\") pod \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\" (UID: \"1b0c9935-9165-4a32-bd48-d4be99ebccbd\") " Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.272922 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b0c9935-9165-4a32-bd48-d4be99ebccbd-kube-api-access-htd24" (OuterVolumeSpecName: "kube-api-access-htd24") pod "1b0c9935-9165-4a32-bd48-d4be99ebccbd" (UID: "1b0c9935-9165-4a32-bd48-d4be99ebccbd"). InnerVolumeSpecName "kube-api-access-htd24". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.309947 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-config" (OuterVolumeSpecName: "config") pod "1b0c9935-9165-4a32-bd48-d4be99ebccbd" (UID: "1b0c9935-9165-4a32-bd48-d4be99ebccbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.316112 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b0c9935-9165-4a32-bd48-d4be99ebccbd" (UID: "1b0c9935-9165-4a32-bd48-d4be99ebccbd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.316554 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b0c9935-9165-4a32-bd48-d4be99ebccbd" (UID: "1b0c9935-9165-4a32-bd48-d4be99ebccbd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.319839 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b0c9935-9165-4a32-bd48-d4be99ebccbd" (UID: "1b0c9935-9165-4a32-bd48-d4be99ebccbd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.369805 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htd24\" (UniqueName: \"kubernetes.io/projected/1b0c9935-9165-4a32-bd48-d4be99ebccbd-kube-api-access-htd24\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.369891 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.369911 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.369926 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.369975 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b0c9935-9165-4a32-bd48-d4be99ebccbd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.538139 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" event={"ID":"1b0c9935-9165-4a32-bd48-d4be99ebccbd","Type":"ContainerDied","Data":"f858dec80bb354bc4e886135fca719af5af1c65d2786c7533a3e2ce1b64bf3ff"} Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.538192 4909 scope.go:117] "RemoveContainer" containerID="6f7064a97e29ed3e4208101640c1c99e95f5bf2c3b9399fbe9983e7f26c21770" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.538317 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7485969d9c-mvbhq" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.569012 4909 scope.go:117] "RemoveContainer" containerID="b0ed9b92298641e29a944c730eac85c15105e6b62814b5d4f05f19e9eb0e711e" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.574854 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7485969d9c-mvbhq"] Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.584763 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7485969d9c-mvbhq"] Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.653304 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.729903 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-ff6d88966-pkkdc" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.762296 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.763143 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:21 crc kubenswrapper[4909]: I1126 08:26:21.825035 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:22 crc kubenswrapper[4909]: I1126 08:26:22.507495 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b0c9935-9165-4a32-bd48-d4be99ebccbd" path="/var/lib/kubelet/pods/1b0c9935-9165-4a32-bd48-d4be99ebccbd/volumes" Nov 26 08:26:22 crc kubenswrapper[4909]: I1126 08:26:22.613529 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:22 crc kubenswrapper[4909]: I1126 08:26:22.667958 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hr6ck"] Nov 26 08:26:24 crc kubenswrapper[4909]: I1126 08:26:24.570428 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hr6ck" podUID="a7d88038-4a61-4f19-9054-4d018773f23b" containerName="registry-server" containerID="cri-o://c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8" gracePeriod=2 Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.043258 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.139482 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-utilities\") pod \"a7d88038-4a61-4f19-9054-4d018773f23b\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.140415 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-catalog-content\") pod \"a7d88038-4a61-4f19-9054-4d018773f23b\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.140488 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh6tx\" (UniqueName: \"kubernetes.io/projected/a7d88038-4a61-4f19-9054-4d018773f23b-kube-api-access-gh6tx\") pod \"a7d88038-4a61-4f19-9054-4d018773f23b\" (UID: \"a7d88038-4a61-4f19-9054-4d018773f23b\") " Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.140818 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-utilities" (OuterVolumeSpecName: "utilities") pod "a7d88038-4a61-4f19-9054-4d018773f23b" (UID: "a7d88038-4a61-4f19-9054-4d018773f23b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.143089 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.154161 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7d88038-4a61-4f19-9054-4d018773f23b-kube-api-access-gh6tx" (OuterVolumeSpecName: "kube-api-access-gh6tx") pod "a7d88038-4a61-4f19-9054-4d018773f23b" (UID: "a7d88038-4a61-4f19-9054-4d018773f23b"). InnerVolumeSpecName "kube-api-access-gh6tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.245128 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh6tx\" (UniqueName: \"kubernetes.io/projected/a7d88038-4a61-4f19-9054-4d018773f23b-kube-api-access-gh6tx\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.476227 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7d88038-4a61-4f19-9054-4d018773f23b" (UID: "a7d88038-4a61-4f19-9054-4d018773f23b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.549875 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7d88038-4a61-4f19-9054-4d018773f23b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.586777 4909 generic.go:334] "Generic (PLEG): container finished" podID="a7d88038-4a61-4f19-9054-4d018773f23b" containerID="c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8" exitCode=0 Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.586835 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hr6ck" event={"ID":"a7d88038-4a61-4f19-9054-4d018773f23b","Type":"ContainerDied","Data":"c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8"} Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.586883 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hr6ck" event={"ID":"a7d88038-4a61-4f19-9054-4d018773f23b","Type":"ContainerDied","Data":"9b4286af7542d2028ba8c70831a4cc3e9211ddc789e6c82127de7b3922bf69fc"} Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.586913 4909 scope.go:117] "RemoveContainer" containerID="c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.586981 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hr6ck" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.631927 4909 scope.go:117] "RemoveContainer" containerID="c5a847a02ec845032205537ab273c22a88131643154c4e73e1bdca07d769cc03" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.650166 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hr6ck"] Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.670934 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hr6ck"] Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.682485 4909 scope.go:117] "RemoveContainer" containerID="0acd389a8cf2ed45adf3e4103f71a54a2e22fa4fc0f27dcd9bd21207773e54a3" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.722046 4909 scope.go:117] "RemoveContainer" containerID="c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8" Nov 26 08:26:25 crc kubenswrapper[4909]: E1126 08:26:25.722957 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8\": container with ID starting with c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8 not found: ID does not exist" containerID="c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.722994 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8"} err="failed to get container status \"c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8\": rpc error: code = NotFound desc = could not find container \"c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8\": container with ID starting with c071b9694ef64b42d3a9cb1be327a69ce9141b0a55d52550600ee6420d2334a8 not found: ID does not exist" Nov 26 
08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.723019 4909 scope.go:117] "RemoveContainer" containerID="c5a847a02ec845032205537ab273c22a88131643154c4e73e1bdca07d769cc03" Nov 26 08:26:25 crc kubenswrapper[4909]: E1126 08:26:25.727526 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a847a02ec845032205537ab273c22a88131643154c4e73e1bdca07d769cc03\": container with ID starting with c5a847a02ec845032205537ab273c22a88131643154c4e73e1bdca07d769cc03 not found: ID does not exist" containerID="c5a847a02ec845032205537ab273c22a88131643154c4e73e1bdca07d769cc03" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.727572 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a847a02ec845032205537ab273c22a88131643154c4e73e1bdca07d769cc03"} err="failed to get container status \"c5a847a02ec845032205537ab273c22a88131643154c4e73e1bdca07d769cc03\": rpc error: code = NotFound desc = could not find container \"c5a847a02ec845032205537ab273c22a88131643154c4e73e1bdca07d769cc03\": container with ID starting with c5a847a02ec845032205537ab273c22a88131643154c4e73e1bdca07d769cc03 not found: ID does not exist" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.727602 4909 scope.go:117] "RemoveContainer" containerID="0acd389a8cf2ed45adf3e4103f71a54a2e22fa4fc0f27dcd9bd21207773e54a3" Nov 26 08:26:25 crc kubenswrapper[4909]: E1126 08:26:25.727862 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0acd389a8cf2ed45adf3e4103f71a54a2e22fa4fc0f27dcd9bd21207773e54a3\": container with ID starting with 0acd389a8cf2ed45adf3e4103f71a54a2e22fa4fc0f27dcd9bd21207773e54a3 not found: ID does not exist" containerID="0acd389a8cf2ed45adf3e4103f71a54a2e22fa4fc0f27dcd9bd21207773e54a3" Nov 26 08:26:25 crc kubenswrapper[4909]: I1126 08:26:25.727977 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0acd389a8cf2ed45adf3e4103f71a54a2e22fa4fc0f27dcd9bd21207773e54a3"} err="failed to get container status \"0acd389a8cf2ed45adf3e4103f71a54a2e22fa4fc0f27dcd9bd21207773e54a3\": rpc error: code = NotFound desc = could not find container \"0acd389a8cf2ed45adf3e4103f71a54a2e22fa4fc0f27dcd9bd21207773e54a3\": container with ID starting with 0acd389a8cf2ed45adf3e4103f71a54a2e22fa4fc0f27dcd9bd21207773e54a3 not found: ID does not exist" Nov 26 08:26:26 crc kubenswrapper[4909]: I1126 08:26:26.510133 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7d88038-4a61-4f19-9054-4d018773f23b" path="/var/lib/kubelet/pods/a7d88038-4a61-4f19-9054-4d018773f23b/volumes" Nov 26 08:26:30 crc kubenswrapper[4909]: I1126 08:26:30.498792 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:26:30 crc kubenswrapper[4909]: E1126 08:26:30.499630 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:26:33 crc kubenswrapper[4909]: I1126 08:26:33.980363 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lgjrg"] Nov 26 
08:26:33 crc kubenswrapper[4909]: E1126 08:26:33.980948 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d88038-4a61-4f19-9054-4d018773f23b" containerName="registry-server" Nov 26 08:26:33 crc kubenswrapper[4909]: I1126 08:26:33.980960 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d88038-4a61-4f19-9054-4d018773f23b" containerName="registry-server" Nov 26 08:26:33 crc kubenswrapper[4909]: E1126 08:26:33.980972 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d88038-4a61-4f19-9054-4d018773f23b" containerName="extract-utilities" Nov 26 08:26:33 crc kubenswrapper[4909]: I1126 08:26:33.980977 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d88038-4a61-4f19-9054-4d018773f23b" containerName="extract-utilities" Nov 26 08:26:33 crc kubenswrapper[4909]: E1126 08:26:33.981002 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0c9935-9165-4a32-bd48-d4be99ebccbd" containerName="init" Nov 26 08:26:33 crc kubenswrapper[4909]: I1126 08:26:33.981010 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0c9935-9165-4a32-bd48-d4be99ebccbd" containerName="init" Nov 26 08:26:33 crc kubenswrapper[4909]: E1126 08:26:33.981034 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0c9935-9165-4a32-bd48-d4be99ebccbd" containerName="dnsmasq-dns" Nov 26 08:26:33 crc kubenswrapper[4909]: I1126 08:26:33.981040 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0c9935-9165-4a32-bd48-d4be99ebccbd" containerName="dnsmasq-dns" Nov 26 08:26:33 crc kubenswrapper[4909]: E1126 08:26:33.981062 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d88038-4a61-4f19-9054-4d018773f23b" containerName="extract-content" Nov 26 08:26:33 crc kubenswrapper[4909]: I1126 08:26:33.981068 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d88038-4a61-4f19-9054-4d018773f23b" containerName="extract-content" Nov 26 08:26:33 crc kubenswrapper[4909]: I1126 08:26:33.981213 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d88038-4a61-4f19-9054-4d018773f23b" containerName="registry-server" Nov 26 08:26:33 crc kubenswrapper[4909]: I1126 08:26:33.981236 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0c9935-9165-4a32-bd48-d4be99ebccbd" containerName="dnsmasq-dns" Nov 26 08:26:33 crc kubenswrapper[4909]: I1126 08:26:33.981815 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lgjrg" Nov 26 08:26:33 crc kubenswrapper[4909]: I1126 08:26:33.993830 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lgjrg"] Nov 26 08:26:34 crc kubenswrapper[4909]: I1126 08:26:34.104937 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c852x\" (UniqueName: \"kubernetes.io/projected/3473da28-3496-4ca7-bf00-33062d75438f-kube-api-access-c852x\") pod \"neutron-db-create-lgjrg\" (UID: \"3473da28-3496-4ca7-bf00-33062d75438f\") " pod="openstack/neutron-db-create-lgjrg" Nov 26 08:26:34 crc kubenswrapper[4909]: I1126 08:26:34.206222 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c852x\" (UniqueName: \"kubernetes.io/projected/3473da28-3496-4ca7-bf00-33062d75438f-kube-api-access-c852x\") pod \"neutron-db-create-lgjrg\" (UID: \"3473da28-3496-4ca7-bf00-33062d75438f\") " pod="openstack/neutron-db-create-lgjrg" Nov 26 08:26:34 crc kubenswrapper[4909]: I1126 08:26:34.237282 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c852x\" (UniqueName: \"kubernetes.io/projected/3473da28-3496-4ca7-bf00-33062d75438f-kube-api-access-c852x\") pod \"neutron-db-create-lgjrg\" (UID: \"3473da28-3496-4ca7-bf00-33062d75438f\") " pod="openstack/neutron-db-create-lgjrg" Nov 26 08:26:34 crc kubenswrapper[4909]: I1126 08:26:34.300017 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lgjrg" Nov 26 08:26:34 crc kubenswrapper[4909]: I1126 08:26:34.723795 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lgjrg"] Nov 26 08:26:34 crc kubenswrapper[4909]: W1126 08:26:34.728176 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3473da28_3496_4ca7_bf00_33062d75438f.slice/crio-43e169ff655f1b917a86421d4955e90d86c07bba8522619d3d679b00ff4ff699 WatchSource:0}: Error finding container 43e169ff655f1b917a86421d4955e90d86c07bba8522619d3d679b00ff4ff699: Status 404 returned error can't find the container with id 43e169ff655f1b917a86421d4955e90d86c07bba8522619d3d679b00ff4ff699 Nov 26 08:26:35 crc kubenswrapper[4909]: I1126 08:26:35.677983 4909 generic.go:334] "Generic (PLEG): container finished" podID="3473da28-3496-4ca7-bf00-33062d75438f" containerID="fc6afa17d6730e83eb9f9a629ec0d3084b061a99439285e0971933bd62519f50" exitCode=0 Nov 26 08:26:35 crc kubenswrapper[4909]: I1126 08:26:35.678041 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lgjrg" event={"ID":"3473da28-3496-4ca7-bf00-33062d75438f","Type":"ContainerDied","Data":"fc6afa17d6730e83eb9f9a629ec0d3084b061a99439285e0971933bd62519f50"} Nov 26 08:26:35 crc kubenswrapper[4909]: I1126 08:26:35.678071 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lgjrg" event={"ID":"3473da28-3496-4ca7-bf00-33062d75438f","Type":"ContainerStarted","Data":"43e169ff655f1b917a86421d4955e90d86c07bba8522619d3d679b00ff4ff699"} Nov 26 08:26:37 crc kubenswrapper[4909]: I1126 08:26:37.117416 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lgjrg" Nov 26 08:26:37 crc kubenswrapper[4909]: I1126 08:26:37.157251 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c852x\" (UniqueName: \"kubernetes.io/projected/3473da28-3496-4ca7-bf00-33062d75438f-kube-api-access-c852x\") pod \"3473da28-3496-4ca7-bf00-33062d75438f\" (UID: \"3473da28-3496-4ca7-bf00-33062d75438f\") " Nov 26 08:26:37 crc kubenswrapper[4909]: I1126 08:26:37.162340 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3473da28-3496-4ca7-bf00-33062d75438f-kube-api-access-c852x" (OuterVolumeSpecName: "kube-api-access-c852x") pod "3473da28-3496-4ca7-bf00-33062d75438f" (UID: "3473da28-3496-4ca7-bf00-33062d75438f"). InnerVolumeSpecName "kube-api-access-c852x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:26:37 crc kubenswrapper[4909]: I1126 08:26:37.261007 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c852x\" (UniqueName: \"kubernetes.io/projected/3473da28-3496-4ca7-bf00-33062d75438f-kube-api-access-c852x\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:37 crc kubenswrapper[4909]: I1126 08:26:37.697781 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lgjrg" event={"ID":"3473da28-3496-4ca7-bf00-33062d75438f","Type":"ContainerDied","Data":"43e169ff655f1b917a86421d4955e90d86c07bba8522619d3d679b00ff4ff699"} Nov 26 08:26:37 crc kubenswrapper[4909]: I1126 08:26:37.697824 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43e169ff655f1b917a86421d4955e90d86c07bba8522619d3d679b00ff4ff699" Nov 26 08:26:37 crc kubenswrapper[4909]: I1126 08:26:37.697831 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lgjrg" Nov 26 08:26:44 crc kubenswrapper[4909]: I1126 08:26:44.085497 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a2dd-account-create-pbnxh"] Nov 26 08:26:44 crc kubenswrapper[4909]: E1126 08:26:44.086542 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3473da28-3496-4ca7-bf00-33062d75438f" containerName="mariadb-database-create" Nov 26 08:26:44 crc kubenswrapper[4909]: I1126 08:26:44.086583 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3473da28-3496-4ca7-bf00-33062d75438f" containerName="mariadb-database-create" Nov 26 08:26:44 crc kubenswrapper[4909]: I1126 08:26:44.086833 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3473da28-3496-4ca7-bf00-33062d75438f" containerName="mariadb-database-create" Nov 26 08:26:44 crc kubenswrapper[4909]: I1126 08:26:44.087544 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a2dd-account-create-pbnxh" Nov 26 08:26:44 crc kubenswrapper[4909]: I1126 08:26:44.091381 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 26 08:26:44 crc kubenswrapper[4909]: I1126 08:26:44.116658 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a2dd-account-create-pbnxh"] Nov 26 08:26:44 crc kubenswrapper[4909]: I1126 08:26:44.280894 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs7qx\" (UniqueName: \"kubernetes.io/projected/5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6-kube-api-access-zs7qx\") pod \"neutron-a2dd-account-create-pbnxh\" (UID: \"5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6\") " pod="openstack/neutron-a2dd-account-create-pbnxh" Nov 26 08:26:44 crc kubenswrapper[4909]: I1126 08:26:44.384339 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs7qx\" (UniqueName: \"kubernetes.io/projected/5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6-kube-api-access-zs7qx\") pod \"neutron-a2dd-account-create-pbnxh\" (UID: \"5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6\") " pod="openstack/neutron-a2dd-account-create-pbnxh" Nov 26 08:26:44 crc kubenswrapper[4909]: I1126 08:26:44.412775 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs7qx\" (UniqueName: \"kubernetes.io/projected/5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6-kube-api-access-zs7qx\") pod \"neutron-a2dd-account-create-pbnxh\" (UID: \"5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6\") " pod="openstack/neutron-a2dd-account-create-pbnxh" Nov 26 08:26:44 crc kubenswrapper[4909]: I1126 08:26:44.419662 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a2dd-account-create-pbnxh" Nov 26 08:26:44 crc kubenswrapper[4909]: I1126 08:26:44.904134 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a2dd-account-create-pbnxh"] Nov 26 08:26:44 crc kubenswrapper[4909]: W1126 08:26:44.906256 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fd3b2e7_3f4a_4016_b530_a49e1e0a87c6.slice/crio-529b5eb5d447e54a026ddc8b9f955e6dc26cebdb76ad16c2cdf746bd385beb35 WatchSource:0}: Error finding container 529b5eb5d447e54a026ddc8b9f955e6dc26cebdb76ad16c2cdf746bd385beb35: Status 404 returned error can't find the container with id 529b5eb5d447e54a026ddc8b9f955e6dc26cebdb76ad16c2cdf746bd385beb35 Nov 26 08:26:45 crc kubenswrapper[4909]: I1126 08:26:45.499251 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:26:45 crc kubenswrapper[4909]: E1126 08:26:45.499974 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:26:45 crc kubenswrapper[4909]: I1126 08:26:45.785862 4909 generic.go:334] "Generic (PLEG): container finished" podID="5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6" containerID="c05af6cab615ccc15c9bb0cfd6a24f63508d820812aab49b09346124b9bd3914" exitCode=0 Nov 26 08:26:45 crc kubenswrapper[4909]: I1126 08:26:45.785951 4909 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a2dd-account-create-pbnxh" event={"ID":"5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6","Type":"ContainerDied","Data":"c05af6cab615ccc15c9bb0cfd6a24f63508d820812aab49b09346124b9bd3914"} Nov 26 08:26:45 crc kubenswrapper[4909]: I1126 08:26:45.786073 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a2dd-account-create-pbnxh" event={"ID":"5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6","Type":"ContainerStarted","Data":"529b5eb5d447e54a026ddc8b9f955e6dc26cebdb76ad16c2cdf746bd385beb35"} Nov 26 08:26:47 crc kubenswrapper[4909]: I1126 08:26:47.157615 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a2dd-account-create-pbnxh" Nov 26 08:26:47 crc kubenswrapper[4909]: I1126 08:26:47.333541 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs7qx\" (UniqueName: \"kubernetes.io/projected/5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6-kube-api-access-zs7qx\") pod \"5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6\" (UID: \"5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6\") " Nov 26 08:26:47 crc kubenswrapper[4909]: I1126 08:26:47.339240 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6-kube-api-access-zs7qx" (OuterVolumeSpecName: "kube-api-access-zs7qx") pod "5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6" (UID: "5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6"). InnerVolumeSpecName "kube-api-access-zs7qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:26:47 crc kubenswrapper[4909]: I1126 08:26:47.435657 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs7qx\" (UniqueName: \"kubernetes.io/projected/5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6-kube-api-access-zs7qx\") on node \"crc\" DevicePath \"\"" Nov 26 08:26:47 crc kubenswrapper[4909]: I1126 08:26:47.805717 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a2dd-account-create-pbnxh" event={"ID":"5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6","Type":"ContainerDied","Data":"529b5eb5d447e54a026ddc8b9f955e6dc26cebdb76ad16c2cdf746bd385beb35"} Nov 26 08:26:47 crc kubenswrapper[4909]: I1126 08:26:47.806062 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="529b5eb5d447e54a026ddc8b9f955e6dc26cebdb76ad16c2cdf746bd385beb35" Nov 26 08:26:47 crc kubenswrapper[4909]: I1126 08:26:47.805835 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a2dd-account-create-pbnxh" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.327363 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ncs2m"] Nov 26 08:26:49 crc kubenswrapper[4909]: E1126 08:26:49.328015 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6" containerName="mariadb-account-create" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.328027 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6" containerName="mariadb-account-create" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.328193 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6" containerName="mariadb-account-create" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.328744 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ncs2m" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.330420 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.331135 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.338978 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-264q4" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.344299 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ncs2m"] Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.468813 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-combined-ca-bundle\") pod \"neutron-db-sync-ncs2m\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") " pod="openstack/neutron-db-sync-ncs2m" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.468882 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77qw\" (UniqueName: \"kubernetes.io/projected/ddd33700-e3a8-408c-a906-8f26ee87dbb8-kube-api-access-d77qw\") pod \"neutron-db-sync-ncs2m\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") " pod="openstack/neutron-db-sync-ncs2m" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.468942 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-config\") pod \"neutron-db-sync-ncs2m\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") " pod="openstack/neutron-db-sync-ncs2m" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.570693 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-config\") pod \"neutron-db-sync-ncs2m\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") " pod="openstack/neutron-db-sync-ncs2m" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.570937 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-combined-ca-bundle\") pod \"neutron-db-sync-ncs2m\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") " pod="openstack/neutron-db-sync-ncs2m" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.571008 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d77qw\" (UniqueName: \"kubernetes.io/projected/ddd33700-e3a8-408c-a906-8f26ee87dbb8-kube-api-access-d77qw\") pod \"neutron-db-sync-ncs2m\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") " pod="openstack/neutron-db-sync-ncs2m" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.577455 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-config\") pod \"neutron-db-sync-ncs2m\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") " pod="openstack/neutron-db-sync-ncs2m" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.578268 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-combined-ca-bundle\") pod \"neutron-db-sync-ncs2m\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") " pod="openstack/neutron-db-sync-ncs2m" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.589791 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d77qw\" (UniqueName: \"kubernetes.io/projected/ddd33700-e3a8-408c-a906-8f26ee87dbb8-kube-api-access-d77qw\") pod \"neutron-db-sync-ncs2m\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") " pod="openstack/neutron-db-sync-ncs2m" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.653723 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ncs2m" Nov 26 08:26:49 crc kubenswrapper[4909]: I1126 08:26:49.974395 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ncs2m"] Nov 26 08:26:50 crc kubenswrapper[4909]: W1126 08:26:50.468046 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddd33700_e3a8_408c_a906_8f26ee87dbb8.slice/crio-785b3b50c52d7e8f03b963e7b18a7bee1440ea63e32362da16b62e1f71624531 WatchSource:0}: Error finding container 785b3b50c52d7e8f03b963e7b18a7bee1440ea63e32362da16b62e1f71624531: Status 404 returned error can't find the container with id 785b3b50c52d7e8f03b963e7b18a7bee1440ea63e32362da16b62e1f71624531 Nov 26 08:26:50 crc kubenswrapper[4909]: I1126 08:26:50.843270 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ncs2m" event={"ID":"ddd33700-e3a8-408c-a906-8f26ee87dbb8","Type":"ContainerStarted","Data":"b512a3a0c5b7b825c9d595fc05d34fcc60a710b503c26ea378c8f406dd86fad5"} Nov 26 08:26:50 crc kubenswrapper[4909]: I1126 08:26:50.843951 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ncs2m" event={"ID":"ddd33700-e3a8-408c-a906-8f26ee87dbb8","Type":"ContainerStarted","Data":"785b3b50c52d7e8f03b963e7b18a7bee1440ea63e32362da16b62e1f71624531"} Nov 26 08:26:50 crc kubenswrapper[4909]: I1126 08:26:50.864013 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ncs2m" podStartSLOduration=1.863990257 podStartE2EDuration="1.863990257s" podCreationTimestamp="2025-11-26 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:26:50.857718056 +0000 UTC m=+5183.003929222" watchObservedRunningTime="2025-11-26 08:26:50.863990257 +0000 UTC m=+5183.010201423" Nov 26 08:26:55 crc kubenswrapper[4909]: I1126 08:26:55.888109 4909 generic.go:334] "Generic (PLEG): container finished" podID="ddd33700-e3a8-408c-a906-8f26ee87dbb8" containerID="b512a3a0c5b7b825c9d595fc05d34fcc60a710b503c26ea378c8f406dd86fad5" exitCode=0 Nov 26 08:26:55 crc kubenswrapper[4909]: I1126 08:26:55.888286 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ncs2m" event={"ID":"ddd33700-e3a8-408c-a906-8f26ee87dbb8","Type":"ContainerDied","Data":"b512a3a0c5b7b825c9d595fc05d34fcc60a710b503c26ea378c8f406dd86fad5"} Nov 26 08:26:56 crc kubenswrapper[4909]: I1126 08:26:56.499368 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:26:56 crc kubenswrapper[4909]: E1126 08:26:56.499644 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.287978 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ncs2m"
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.470164 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-combined-ca-bundle\") pod \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") "
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.470221 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d77qw\" (UniqueName: \"kubernetes.io/projected/ddd33700-e3a8-408c-a906-8f26ee87dbb8-kube-api-access-d77qw\") pod \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") "
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.470285 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-config\") pod \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\" (UID: \"ddd33700-e3a8-408c-a906-8f26ee87dbb8\") "
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.476052 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd33700-e3a8-408c-a906-8f26ee87dbb8-kube-api-access-d77qw" (OuterVolumeSpecName: "kube-api-access-d77qw") pod "ddd33700-e3a8-408c-a906-8f26ee87dbb8" (UID: "ddd33700-e3a8-408c-a906-8f26ee87dbb8"). InnerVolumeSpecName "kube-api-access-d77qw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.499715 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-config" (OuterVolumeSpecName: "config") pod "ddd33700-e3a8-408c-a906-8f26ee87dbb8" (UID: "ddd33700-e3a8-408c-a906-8f26ee87dbb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.500387 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddd33700-e3a8-408c-a906-8f26ee87dbb8" (UID: "ddd33700-e3a8-408c-a906-8f26ee87dbb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.572559 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.572606 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d77qw\" (UniqueName: \"kubernetes.io/projected/ddd33700-e3a8-408c-a906-8f26ee87dbb8-kube-api-access-d77qw\") on node \"crc\" DevicePath \"\""
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.572620 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddd33700-e3a8-408c-a906-8f26ee87dbb8-config\") on node \"crc\" DevicePath \"\""
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.914857 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ncs2m" event={"ID":"ddd33700-e3a8-408c-a906-8f26ee87dbb8","Type":"ContainerDied","Data":"785b3b50c52d7e8f03b963e7b18a7bee1440ea63e32362da16b62e1f71624531"}
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.914900 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="785b3b50c52d7e8f03b963e7b18a7bee1440ea63e32362da16b62e1f71624531"
Nov 26 08:26:57 crc kubenswrapper[4909]: I1126 08:26:57.914961 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ncs2m"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.171314 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-94d77d5bf-ct248"]
Nov 26 08:26:58 crc kubenswrapper[4909]: E1126 08:26:58.171697 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd33700-e3a8-408c-a906-8f26ee87dbb8" containerName="neutron-db-sync"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.171708 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd33700-e3a8-408c-a906-8f26ee87dbb8" containerName="neutron-db-sync"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.171860 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd33700-e3a8-408c-a906-8f26ee87dbb8" containerName="neutron-db-sync"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.177894 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
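The reconciler_common.go and operation_generator.go entries above trace the volume manager's fixed per-volume progression: VerifyControllerAttachedVolume started, MountVolume started, MountVolume.SetUp succeeded while the pod runs, then UnmountVolume started, UnmountVolume.TearDown succeeded, and finally Volume detached, exactly the order just seen for the neutron-db-sync-ncs2m volumes. A rough sketch that groups these lines by volume name to recover that progression (note the TearDown lines name a volume by its full spec path, with the logical name in OuterVolumeSpecName; this sketch does not reconcile the two):

```python
import re
from collections import defaultdict

# Ordered per-volume lifecycle phrases as they appear in this journal.
PHASES = [
    "VerifyControllerAttachedVolume started",
    "MountVolume started",
    "MountVolume.SetUp succeeded",
    "UnmountVolume started",
    "UnmountVolume.TearDown succeeded",
    "Volume detached",
]
# Volume names appear either klog-escaped as \"name\" or plain as "name".
NAME_RE = re.compile(r'volume \\?"(?P<name>[^"\\]+)\\?"')

def volume_phases(lines):
    seen = defaultdict(list)
    for line in lines:
        for phase in PHASES:
            if phase in line:
                m = NAME_RE.search(line)
                if m:
                    seen[m["name"]].append(phase)
                break
    return seen

with open("kubelet.log") as f:          # hypothetical export of this journal
    for name, phases in volume_phases(f).items():
        print(name, "->", phases)
```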
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.181798 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-sb\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.181841 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp7nm\" (UniqueName: \"kubernetes.io/projected/7d6894c7-d5b6-422d-b870-bf1116c593a1-kube-api-access-zp7nm\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.181907 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-dns-svc\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.181990 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-config\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.182029 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-nb\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.186061 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-94d77d5bf-ct248"]
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.283207 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-sb\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.283252 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp7nm\" (UniqueName: \"kubernetes.io/projected/7d6894c7-d5b6-422d-b870-bf1116c593a1-kube-api-access-zp7nm\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.283281 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-dns-svc\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.283343 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-config\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.283362 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-nb\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.284312 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-sb\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.284322 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-config\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.285927 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-nb\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.288137 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-dns-svc\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.304453 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp7nm\" (UniqueName: \"kubernetes.io/projected/7d6894c7-d5b6-422d-b870-bf1116c593a1-kube-api-access-zp7nm\") pod \"dnsmasq-dns-94d77d5bf-ct248\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.323788 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b4496fbbf-ngkvc"]
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.325147 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b4496fbbf-ngkvc"
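Two similar sandbox messages alternate through this section and are worth telling apart: util.go:30 "No sandbox for pod can be found" appears on the creation path, right after a pod is added and before its first sandbox exists, while util.go:48 "No ready sandbox for pod can be found" appears once an existing sandbox has stopped being ready, which in this log coincides with the teardown of completed job pods. A throwaway counter that separates the two by source location, under the same format assumptions as the sketches above:

```python
import re
from collections import Counter

SANDBOX_RE = re.compile(
    r'util\.go:(?P<loc>30|48)\] "No (ready )?sandbox for pod can be found'
)

counts = Counter()
with open("kubelet.log") as f:          # hypothetical export of this journal
    for line in f:
        m = SANDBOX_RE.search(line)
        if m:
            counts["util.go:" + m["loc"]] += 1
print(counts)   # creation-path (util.go:30) vs not-ready (util.go:48) counts
```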
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.330384 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-264q4"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.330650 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.341947 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.383221 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b4496fbbf-ngkvc"]
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.385020 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a087136-4700-48b9-b87c-0bc79ca50f55-combined-ca-bundle\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.399203 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a087136-4700-48b9-b87c-0bc79ca50f55-config\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.399411 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stqh4\" (UniqueName: \"kubernetes.io/projected/3a087136-4700-48b9-b87c-0bc79ca50f55-kube-api-access-stqh4\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.399572 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3a087136-4700-48b9-b87c-0bc79ca50f55-httpd-config\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.504879 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.505242 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a087136-4700-48b9-b87c-0bc79ca50f55-combined-ca-bundle\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.505294 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a087136-4700-48b9-b87c-0bc79ca50f55-config\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.505367 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stqh4\" (UniqueName: \"kubernetes.io/projected/3a087136-4700-48b9-b87c-0bc79ca50f55-kube-api-access-stqh4\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.505421 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3a087136-4700-48b9-b87c-0bc79ca50f55-httpd-config\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.518639 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3a087136-4700-48b9-b87c-0bc79ca50f55-httpd-config\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.518768 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a087136-4700-48b9-b87c-0bc79ca50f55-config\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.521181 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a087136-4700-48b9-b87c-0bc79ca50f55-combined-ca-bundle\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.528909 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stqh4\" (UniqueName: \"kubernetes.io/projected/3a087136-4700-48b9-b87c-0bc79ca50f55-kube-api-access-stqh4\") pod \"neutron-7b4496fbbf-ngkvc\" (UID: \"3a087136-4700-48b9-b87c-0bc79ca50f55\") " pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:58 crc kubenswrapper[4909]: I1126 08:26:58.705558 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:59 crc kubenswrapper[4909]: I1126 08:26:59.035395 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-94d77d5bf-ct248"]
Nov 26 08:26:59 crc kubenswrapper[4909]: I1126 08:26:59.272058 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b4496fbbf-ngkvc"]
Nov 26 08:26:59 crc kubenswrapper[4909]: W1126 08:26:59.274404 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a087136_4700_48b9_b87c_0bc79ca50f55.slice/crio-c7400fc9ca09927c468ee90f6ca4e96d3fa8fae11e3de142b03e89e13f9652f0 WatchSource:0}: Error finding container c7400fc9ca09927c468ee90f6ca4e96d3fa8fae11e3de142b03e89e13f9652f0: Status 404 returned error can't find the container with id c7400fc9ca09927c468ee90f6ca4e96d3fa8fae11e3de142b03e89e13f9652f0
Nov 26 08:26:59 crc kubenswrapper[4909]: E1126 08:26:59.407166 4909 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d6894c7_d5b6_422d_b870_bf1116c593a1.slice/crio-conmon-e24b8699d2ae09e9299c30e2570f787232f609dcb667cd3fe07da58a0e151085.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d6894c7_d5b6_422d_b870_bf1116c593a1.slice/crio-e24b8699d2ae09e9299c30e2570f787232f609dcb667cd3fe07da58a0e151085.scope\": RecentStats: unable to find data in memory cache]"
Nov 26 08:26:59 crc kubenswrapper[4909]: I1126 08:26:59.936893 4909 generic.go:334] "Generic (PLEG): container finished" podID="7d6894c7-d5b6-422d-b870-bf1116c593a1" containerID="e24b8699d2ae09e9299c30e2570f787232f609dcb667cd3fe07da58a0e151085" exitCode=0
Nov 26 08:26:59 crc kubenswrapper[4909]: I1126 08:26:59.937607 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94d77d5bf-ct248" event={"ID":"7d6894c7-d5b6-422d-b870-bf1116c593a1","Type":"ContainerDied","Data":"e24b8699d2ae09e9299c30e2570f787232f609dcb667cd3fe07da58a0e151085"}
Nov 26 08:26:59 crc kubenswrapper[4909]: I1126 08:26:59.937650 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94d77d5bf-ct248" event={"ID":"7d6894c7-d5b6-422d-b870-bf1116c593a1","Type":"ContainerStarted","Data":"eb180194e01eeebaf781cd9de3591473dba801b650db3b7848ce3a75142eb8bb"}
Nov 26 08:26:59 crc kubenswrapper[4909]: I1126 08:26:59.942311 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4496fbbf-ngkvc" event={"ID":"3a087136-4700-48b9-b87c-0bc79ca50f55","Type":"ContainerStarted","Data":"c85eaa87602ecd138e6835d992136c0de5d8f06dd8532932847c6e43a711b1fa"}
Nov 26 08:26:59 crc kubenswrapper[4909]: I1126 08:26:59.944721 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:26:59 crc kubenswrapper[4909]: I1126 08:26:59.944756 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4496fbbf-ngkvc" event={"ID":"3a087136-4700-48b9-b87c-0bc79ca50f55","Type":"ContainerStarted","Data":"be582bd6358bc55c2c9fbf5942966d0bea4cf001624ddff7a7961faa6e007c83"}
Nov 26 08:26:59 crc kubenswrapper[4909]: I1126 08:26:59.944775 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4496fbbf-ngkvc" event={"ID":"3a087136-4700-48b9-b87c-0bc79ca50f55","Type":"ContainerStarted","Data":"c7400fc9ca09927c468ee90f6ca4e96d3fa8fae11e3de142b03e89e13f9652f0"}
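The generic.go:334 "container finished" entries carry the exit code, which is the quickest way to confirm that the run-to-completion containers in this section (the dnsmasq init container e24b8699... just above, and the db-create/db-sync jobs elsewhere) all exited 0. A filter under the same format assumptions as the earlier sketches:

```python
import re

FIN_RE = re.compile(
    r'container finished" podID="(?P<uid>[^"]+)" '
    r'containerID="(?P<cid>[^"]+)" exitCode=(?P<code>-?\d+)'
)

def nonzero_exits(lines):
    """Return (podID, containerID prefix, exitCode) for containers that failed."""
    out = []
    for line in lines:
        m = FIN_RE.search(line)
        if m and int(m["code"]) != 0:
            out.append((m["uid"], m["cid"][:12], int(m["code"])))
    return out

with open("kubelet.log") as f:          # hypothetical export of this journal
    print(nonzero_exits(f) or "all finished containers exited 0")
```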
Nov 26 08:27:00 crc kubenswrapper[4909]: I1126 08:27:00.014224 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b4496fbbf-ngkvc" podStartSLOduration=2.014198871 podStartE2EDuration="2.014198871s" podCreationTimestamp="2025-11-26 08:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:27:00.007229911 +0000 UTC m=+5192.153441087" watchObservedRunningTime="2025-11-26 08:27:00.014198871 +0000 UTC m=+5192.160410027"
Nov 26 08:27:00 crc kubenswrapper[4909]: I1126 08:27:00.951095 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94d77d5bf-ct248" event={"ID":"7d6894c7-d5b6-422d-b870-bf1116c593a1","Type":"ContainerStarted","Data":"1425eb56e2a6641f1f2752d67adaf1a84dc696aef0b549cd423bd9a3f8731867"}
Nov 26 08:27:01 crc kubenswrapper[4909]: I1126 08:27:01.962781 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:27:07 crc kubenswrapper[4909]: I1126 08:27:07.498907 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36"
Nov 26 08:27:07 crc kubenswrapper[4909]: E1126 08:27:07.499380 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:27:08 crc kubenswrapper[4909]: I1126 08:27:08.511109 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-94d77d5bf-ct248"
Nov 26 08:27:08 crc kubenswrapper[4909]: I1126 08:27:08.542099 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-94d77d5bf-ct248" podStartSLOduration=10.542073462 podStartE2EDuration="10.542073462s" podCreationTimestamp="2025-11-26 08:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:27:00.968617001 +0000 UTC m=+5193.114828177" watchObservedRunningTime="2025-11-26 08:27:08.542073462 +0000 UTC m=+5200.688284638"
Nov 26 08:27:08 crc kubenswrapper[4909]: I1126 08:27:08.580782 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869545f9c9-j228l"]
Nov 26 08:27:08 crc kubenswrapper[4909]: I1126 08:27:08.581003 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869545f9c9-j228l" podUID="712a969d-2f65-4c0b-8550-913402cdee55" containerName="dnsmasq-dns" containerID="cri-o://19a5d6088d90508de72937a33388b08098db136345f6ccd9b15511bea65f4d5a" gracePeriod=10
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.029237 4909 generic.go:334] "Generic (PLEG): container finished" podID="712a969d-2f65-4c0b-8550-913402cdee55" containerID="19a5d6088d90508de72937a33388b08098db136345f6ccd9b15511bea65f4d5a" exitCode=0
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.029390 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869545f9c9-j228l" event={"ID":"712a969d-2f65-4c0b-8550-913402cdee55","Type":"ContainerDied","Data":"19a5d6088d90508de72937a33388b08098db136345f6ccd9b15511bea65f4d5a"}
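The pod_startup_latency_tracker entries report podStartSLOduration as the gap from podCreationTimestamp to watchObservedRunningTime; firstStartedPulling and lastFinishedPulling sit at Go's zero time ("0001-01-01 00:00:00"), meaning no image pull contributed to the latency. Recomputing the neutron-7b4496fbbf-ngkvc figure just above as a sanity check (timestamps copied from that entry, truncated to microseconds):

```python
from datetime import datetime, timezone

# From the tracker entry for openstack/neutron-7b4496fbbf-ngkvc above:
created  = datetime(2025, 11, 26, 8, 26, 58, tzinfo=timezone.utc)            # podCreationTimestamp
observed = datetime(2025, 11, 26, 8, 27, 0, 14198, tzinfo=timezone.utc)      # watchObservedRunningTime, .014198871 truncated

print((observed - created).total_seconds())   # 2.014198, matching podStartSLOduration=2.014198871
```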
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.118208 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869545f9c9-j228l"
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.206181 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-dns-svc\") pod \"712a969d-2f65-4c0b-8550-913402cdee55\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") "
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.206513 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwssj\" (UniqueName: \"kubernetes.io/projected/712a969d-2f65-4c0b-8550-913402cdee55-kube-api-access-fwssj\") pod \"712a969d-2f65-4c0b-8550-913402cdee55\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") "
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.206643 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-sb\") pod \"712a969d-2f65-4c0b-8550-913402cdee55\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") "
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.206729 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-config\") pod \"712a969d-2f65-4c0b-8550-913402cdee55\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") "
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.206840 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-nb\") pod \"712a969d-2f65-4c0b-8550-913402cdee55\" (UID: \"712a969d-2f65-4c0b-8550-913402cdee55\") "
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.214254 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712a969d-2f65-4c0b-8550-913402cdee55-kube-api-access-fwssj" (OuterVolumeSpecName: "kube-api-access-fwssj") pod "712a969d-2f65-4c0b-8550-913402cdee55" (UID: "712a969d-2f65-4c0b-8550-913402cdee55"). InnerVolumeSpecName "kube-api-access-fwssj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.257529 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "712a969d-2f65-4c0b-8550-913402cdee55" (UID: "712a969d-2f65-4c0b-8550-913402cdee55"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.263566 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "712a969d-2f65-4c0b-8550-913402cdee55" (UID: "712a969d-2f65-4c0b-8550-913402cdee55"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.267382 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "712a969d-2f65-4c0b-8550-913402cdee55" (UID: "712a969d-2f65-4c0b-8550-913402cdee55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.270268 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-config" (OuterVolumeSpecName: "config") pod "712a969d-2f65-4c0b-8550-913402cdee55" (UID: "712a969d-2f65-4c0b-8550-913402cdee55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.319827 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwssj\" (UniqueName: \"kubernetes.io/projected/712a969d-2f65-4c0b-8550-913402cdee55-kube-api-access-fwssj\") on node \"crc\" DevicePath \"\""
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.319870 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.319880 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-config\") on node \"crc\" DevicePath \"\""
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.319889 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 26 08:27:09 crc kubenswrapper[4909]: I1126 08:27:09.319897 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/712a969d-2f65-4c0b-8550-913402cdee55-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 26 08:27:10 crc kubenswrapper[4909]: I1126 08:27:10.038848 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869545f9c9-j228l" event={"ID":"712a969d-2f65-4c0b-8550-913402cdee55","Type":"ContainerDied","Data":"a0f26cffd3c192db84a8d9adeae245f660a2157bdfde558325758662db4b1cab"}
Nov 26 08:27:10 crc kubenswrapper[4909]: I1126 08:27:10.038948 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869545f9c9-j228l"
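The handover above is the usual replacement dance: the new dnsmasq-dns-94d77d5bf-ct248 pod reports ready at 08:27:08, the API deletes the old dnsmasq-dns-869545f9c9-j228l, and the kubelet kills its dnsmasq-dns container with gracePeriod=10; the ContainerDied event follows within a second, well inside the budget. A sketch for measuring that window from the syslog prefixes (the year is absent from the prefix, so it is supplied as an assumption):

```python
from datetime import datetime

def stamp(line, year=2025):
    """Parse the leading 'Nov 26 08:27:08' prefix of a journal line (year assumed)."""
    return datetime.strptime(f"{year} {line[:15]}", "%Y %b %d %H:%M:%S")

kill = stamp('Nov 26 08:27:08 crc kubenswrapper[4909]: ... "Killing container with a grace period" ...')
died = stamp('Nov 26 08:27:09 crc kubenswrapper[4909]: ... "SyncLoop (PLEG): event for pod" ... ContainerDied ...')
print((died - kill).total_seconds())   # 1.0 at second resolution, well under gracePeriod=10
```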
Nov 26 08:27:10 crc kubenswrapper[4909]: I1126 08:27:10.039258 4909 scope.go:117] "RemoveContainer" containerID="19a5d6088d90508de72937a33388b08098db136345f6ccd9b15511bea65f4d5a"
Nov 26 08:27:10 crc kubenswrapper[4909]: I1126 08:27:10.069019 4909 scope.go:117] "RemoveContainer" containerID="b1e21dfb82bce7be98f25b08d6064496858efad056f98ea1bb5345f43b8c1d62"
Nov 26 08:27:10 crc kubenswrapper[4909]: I1126 08:27:10.070617 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869545f9c9-j228l"]
Nov 26 08:27:10 crc kubenswrapper[4909]: I1126 08:27:10.079123 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869545f9c9-j228l"]
Nov 26 08:27:10 crc kubenswrapper[4909]: I1126 08:27:10.511297 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712a969d-2f65-4c0b-8550-913402cdee55" path="/var/lib/kubelet/pods/712a969d-2f65-4c0b-8550-913402cdee55/volumes"
Nov 26 08:27:18 crc kubenswrapper[4909]: I1126 08:27:18.506973 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36"
Nov 26 08:27:18 crc kubenswrapper[4909]: E1126 08:27:18.507915 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:27:28 crc kubenswrapper[4909]: I1126 08:27:28.712651 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b4496fbbf-ngkvc"
Nov 26 08:27:29 crc kubenswrapper[4909]: I1126 08:27:29.499287 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36"
Nov 26 08:27:29 crc kubenswrapper[4909]: E1126 08:27:29.500154 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:27:35 crc kubenswrapper[4909]: I1126 08:27:35.934739 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-qb2cf"]
Nov 26 08:27:35 crc kubenswrapper[4909]: E1126 08:27:35.935792 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712a969d-2f65-4c0b-8550-913402cdee55" containerName="init"
Nov 26 08:27:35 crc kubenswrapper[4909]: I1126 08:27:35.935813 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="712a969d-2f65-4c0b-8550-913402cdee55" containerName="init"
Nov 26 08:27:35 crc kubenswrapper[4909]: E1126 08:27:35.935891 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712a969d-2f65-4c0b-8550-913402cdee55" containerName="dnsmasq-dns"
Nov 26 08:27:35 crc kubenswrapper[4909]: I1126 08:27:35.935904 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="712a969d-2f65-4c0b-8550-913402cdee55" containerName="dnsmasq-dns"
Nov 26 08:27:35 crc kubenswrapper[4909]: I1126 08:27:35.936202 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="712a969d-2f65-4c0b-8550-913402cdee55" containerName="dnsmasq-dns"
Nov 26 08:27:35 crc kubenswrapper[4909]: I1126 08:27:35.937160 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qb2cf"
Nov 26 08:27:35 crc kubenswrapper[4909]: I1126 08:27:35.949536 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qb2cf"]
Nov 26 08:27:36 crc kubenswrapper[4909]: I1126 08:27:36.071878 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbhrv\" (UniqueName: \"kubernetes.io/projected/be893ffe-0db2-4130-bff6-50da7ff31f66-kube-api-access-qbhrv\") pod \"glance-db-create-qb2cf\" (UID: \"be893ffe-0db2-4130-bff6-50da7ff31f66\") " pod="openstack/glance-db-create-qb2cf"
Nov 26 08:27:36 crc kubenswrapper[4909]: I1126 08:27:36.174153 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbhrv\" (UniqueName: \"kubernetes.io/projected/be893ffe-0db2-4130-bff6-50da7ff31f66-kube-api-access-qbhrv\") pod \"glance-db-create-qb2cf\" (UID: \"be893ffe-0db2-4130-bff6-50da7ff31f66\") " pod="openstack/glance-db-create-qb2cf"
Nov 26 08:27:36 crc kubenswrapper[4909]: I1126 08:27:36.194265 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbhrv\" (UniqueName: \"kubernetes.io/projected/be893ffe-0db2-4130-bff6-50da7ff31f66-kube-api-access-qbhrv\") pod \"glance-db-create-qb2cf\" (UID: \"be893ffe-0db2-4130-bff6-50da7ff31f66\") " pod="openstack/glance-db-create-qb2cf"
Nov 26 08:27:36 crc kubenswrapper[4909]: I1126 08:27:36.303086 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qb2cf"
Nov 26 08:27:36 crc kubenswrapper[4909]: I1126 08:27:36.736967 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qb2cf"]
Nov 26 08:27:36 crc kubenswrapper[4909]: W1126 08:27:36.744815 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe893ffe_0db2_4130_bff6_50da7ff31f66.slice/crio-378d90ec4445bed003febf9903888ba44c2ffff05055d1fb2a31ccb07c662896 WatchSource:0}: Error finding container 378d90ec4445bed003febf9903888ba44c2ffff05055d1fb2a31ccb07c662896: Status 404 returned error can't find the container with id 378d90ec4445bed003febf9903888ba44c2ffff05055d1fb2a31ccb07c662896
Nov 26 08:27:37 crc kubenswrapper[4909]: I1126 08:27:37.278580 4909 generic.go:334] "Generic (PLEG): container finished" podID="be893ffe-0db2-4130-bff6-50da7ff31f66" containerID="12bfc7c0a0669733352f097155500512db2747ed77dec1c4ad9d293a65242816" exitCode=0
Nov 26 08:27:37 crc kubenswrapper[4909]: I1126 08:27:37.278938 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qb2cf" event={"ID":"be893ffe-0db2-4130-bff6-50da7ff31f66","Type":"ContainerDied","Data":"12bfc7c0a0669733352f097155500512db2747ed77dec1c4ad9d293a65242816"}
Nov 26 08:27:37 crc kubenswrapper[4909]: I1126 08:27:37.278967 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qb2cf" event={"ID":"be893ffe-0db2-4130-bff6-50da7ff31f66","Type":"ContainerStarted","Data":"378d90ec4445bed003febf9903888ba44c2ffff05055d1fb2a31ccb07c662896"}
Nov 26 08:27:38 crc kubenswrapper[4909]: I1126 08:27:38.632790 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qb2cf"
Nov 26 08:27:38 crc kubenswrapper[4909]: I1126 08:27:38.717230 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbhrv\" (UniqueName: \"kubernetes.io/projected/be893ffe-0db2-4130-bff6-50da7ff31f66-kube-api-access-qbhrv\") pod \"be893ffe-0db2-4130-bff6-50da7ff31f66\" (UID: \"be893ffe-0db2-4130-bff6-50da7ff31f66\") "
Nov 26 08:27:38 crc kubenswrapper[4909]: I1126 08:27:38.735704 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be893ffe-0db2-4130-bff6-50da7ff31f66-kube-api-access-qbhrv" (OuterVolumeSpecName: "kube-api-access-qbhrv") pod "be893ffe-0db2-4130-bff6-50da7ff31f66" (UID: "be893ffe-0db2-4130-bff6-50da7ff31f66"). InnerVolumeSpecName "kube-api-access-qbhrv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:27:38 crc kubenswrapper[4909]: I1126 08:27:38.818925 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbhrv\" (UniqueName: \"kubernetes.io/projected/be893ffe-0db2-4130-bff6-50da7ff31f66-kube-api-access-qbhrv\") on node \"crc\" DevicePath \"\""
Nov 26 08:27:39 crc kubenswrapper[4909]: I1126 08:27:39.296716 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qb2cf" event={"ID":"be893ffe-0db2-4130-bff6-50da7ff31f66","Type":"ContainerDied","Data":"378d90ec4445bed003febf9903888ba44c2ffff05055d1fb2a31ccb07c662896"}
Nov 26 08:27:39 crc kubenswrapper[4909]: I1126 08:27:39.296754 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="378d90ec4445bed003febf9903888ba44c2ffff05055d1fb2a31ccb07c662896"
Nov 26 08:27:39 crc kubenswrapper[4909]: I1126 08:27:39.296793 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qb2cf"
Nov 26 08:27:40 crc kubenswrapper[4909]: I1126 08:27:40.499622 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36"
Nov 26 08:27:40 crc kubenswrapper[4909]: E1126 08:27:40.501023 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:27:46 crc kubenswrapper[4909]: I1126 08:27:46.021808 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c56e-account-create-w5dkn"]
Nov 26 08:27:46 crc kubenswrapper[4909]: E1126 08:27:46.022666 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be893ffe-0db2-4130-bff6-50da7ff31f66" containerName="mariadb-database-create"
Nov 26 08:27:46 crc kubenswrapper[4909]: I1126 08:27:46.022677 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="be893ffe-0db2-4130-bff6-50da7ff31f66" containerName="mariadb-database-create"
Nov 26 08:27:46 crc kubenswrapper[4909]: I1126 08:27:46.022836 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="be893ffe-0db2-4130-bff6-50da7ff31f66" containerName="mariadb-database-create"
Nov 26 08:27:46 crc kubenswrapper[4909]: I1126 08:27:46.023338 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c56e-account-create-w5dkn"
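machine-config-daemon-4lffv stays in CrashLoopBackOff throughout this window: roughly every 11 seconds (08:26:56, 08:27:07, 08:27:18, 08:27:29, 08:27:40) a periodic pod sync re-evaluates the pod, hits the back-off, and logs "Error syncing pod, skipping"; those lines are sync retries, not individual restart attempts. The 5m0s in the message is the cap of the kubelet's container-restart back-off, which roughly doubles from about 10s per failed restart. A sketch of that schedule (the exact constants are kubelet internals and are assumed here):

```python
# Approximate kubelet container-restart back-off: starts near 10s,
# doubles per failed restart, and is capped at 5m (the "back-off 5m0s" above).
def backoff_schedule(initial=10, factor=2, cap=300, restarts=8):
    delay = initial
    for _ in range(restarts):
        yield min(delay, cap)
        delay *= factor

print(list(backoff_schedule()))   # [10, 20, 40, 80, 160, 300, 300, 300]
```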
Nov 26 08:27:46 crc kubenswrapper[4909]: I1126 08:27:46.026545 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Nov 26 08:27:46 crc kubenswrapper[4909]: I1126 08:27:46.035987 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c56e-account-create-w5dkn"]
Nov 26 08:27:46 crc kubenswrapper[4909]: I1126 08:27:46.164258 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2z46\" (UniqueName: \"kubernetes.io/projected/25c4aa50-bced-4761-984b-ff6e82851af7-kube-api-access-l2z46\") pod \"glance-c56e-account-create-w5dkn\" (UID: \"25c4aa50-bced-4761-984b-ff6e82851af7\") " pod="openstack/glance-c56e-account-create-w5dkn"
Nov 26 08:27:46 crc kubenswrapper[4909]: I1126 08:27:46.266844 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2z46\" (UniqueName: \"kubernetes.io/projected/25c4aa50-bced-4761-984b-ff6e82851af7-kube-api-access-l2z46\") pod \"glance-c56e-account-create-w5dkn\" (UID: \"25c4aa50-bced-4761-984b-ff6e82851af7\") " pod="openstack/glance-c56e-account-create-w5dkn"
Nov 26 08:27:46 crc kubenswrapper[4909]: I1126 08:27:46.294776 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2z46\" (UniqueName: \"kubernetes.io/projected/25c4aa50-bced-4761-984b-ff6e82851af7-kube-api-access-l2z46\") pod \"glance-c56e-account-create-w5dkn\" (UID: \"25c4aa50-bced-4761-984b-ff6e82851af7\") " pod="openstack/glance-c56e-account-create-w5dkn"
Nov 26 08:27:46 crc kubenswrapper[4909]: I1126 08:27:46.364769 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c56e-account-create-w5dkn"
Nov 26 08:27:46 crc kubenswrapper[4909]: I1126 08:27:46.802781 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c56e-account-create-w5dkn"]
Nov 26 08:27:47 crc kubenswrapper[4909]: I1126 08:27:47.371139 4909 generic.go:334] "Generic (PLEG): container finished" podID="25c4aa50-bced-4761-984b-ff6e82851af7" containerID="3828f0117bdac39cc672babf5bfaa4b6ec002526082fec083694ec9dd90c28ba" exitCode=0
Nov 26 08:27:47 crc kubenswrapper[4909]: I1126 08:27:47.371510 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c56e-account-create-w5dkn" event={"ID":"25c4aa50-bced-4761-984b-ff6e82851af7","Type":"ContainerDied","Data":"3828f0117bdac39cc672babf5bfaa4b6ec002526082fec083694ec9dd90c28ba"}
Nov 26 08:27:47 crc kubenswrapper[4909]: I1126 08:27:47.371543 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c56e-account-create-w5dkn" event={"ID":"25c4aa50-bced-4761-984b-ff6e82851af7","Type":"ContainerStarted","Data":"d55c839abefb2f172ed2f8cbbcabdcc95c920214c1092b8d54738351b17e5763"}
Nov 26 08:27:48 crc kubenswrapper[4909]: I1126 08:27:48.776514 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c56e-account-create-w5dkn"
Nov 26 08:27:48 crc kubenswrapper[4909]: I1126 08:27:48.813999 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2z46\" (UniqueName: \"kubernetes.io/projected/25c4aa50-bced-4761-984b-ff6e82851af7-kube-api-access-l2z46\") pod \"25c4aa50-bced-4761-984b-ff6e82851af7\" (UID: \"25c4aa50-bced-4761-984b-ff6e82851af7\") "
Nov 26 08:27:48 crc kubenswrapper[4909]: I1126 08:27:48.820275 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c4aa50-bced-4761-984b-ff6e82851af7-kube-api-access-l2z46" (OuterVolumeSpecName: "kube-api-access-l2z46") pod "25c4aa50-bced-4761-984b-ff6e82851af7" (UID: "25c4aa50-bced-4761-984b-ff6e82851af7"). InnerVolumeSpecName "kube-api-access-l2z46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:27:48 crc kubenswrapper[4909]: I1126 08:27:48.977937 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2z46\" (UniqueName: \"kubernetes.io/projected/25c4aa50-bced-4761-984b-ff6e82851af7-kube-api-access-l2z46\") on node \"crc\" DevicePath \"\""
Nov 26 08:27:49 crc kubenswrapper[4909]: I1126 08:27:49.392716 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c56e-account-create-w5dkn" event={"ID":"25c4aa50-bced-4761-984b-ff6e82851af7","Type":"ContainerDied","Data":"d55c839abefb2f172ed2f8cbbcabdcc95c920214c1092b8d54738351b17e5763"}
Nov 26 08:27:49 crc kubenswrapper[4909]: I1126 08:27:49.392772 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d55c839abefb2f172ed2f8cbbcabdcc95c920214c1092b8d54738351b17e5763"
Nov 26 08:27:49 crc kubenswrapper[4909]: I1126 08:27:49.393090 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c56e-account-create-w5dkn"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.162300 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jstcz"]
Nov 26 08:27:51 crc kubenswrapper[4909]: E1126 08:27:51.163063 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c4aa50-bced-4761-984b-ff6e82851af7" containerName="mariadb-account-create"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.163079 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c4aa50-bced-4761-984b-ff6e82851af7" containerName="mariadb-account-create"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.163317 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c4aa50-bced-4761-984b-ff6e82851af7" containerName="mariadb-account-create"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.166432 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jstcz"
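Reading only the "SyncLoop ADD" lines for the run-to-completion pods gives the operator's sequencing for bringing Glance's database online: glance-db-create-qb2cf, then glance-c56e-account-create-w5dkn, then glance-db-sync-jstcz (mirroring the earlier neutron-db-sync-ncs2m), with each pod fully torn down, volumes detached and stale CPU/memory-manager state pruned, before the next is admitted. A minimal extraction, same assumptions as the sketches above:

```python
import re

ADD_RE = re.compile(r'"SyncLoop ADD" source="api" pods=\["(?P<pod>[^"]+)"\]')

with open("kubelet.log") as f:          # hypothetical export of this journal
    order = [m["pod"] for m in map(ADD_RE.search, f) if m]
print(order)
# ... 'openstack/glance-db-create-qb2cf', 'openstack/glance-c56e-account-create-w5dkn',
#     'openstack/glance-db-sync-jstcz', ...
```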
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.170132 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dpn5l"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.170269 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.187338 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jstcz"]
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.321284 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-config-data\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.321489 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-db-sync-config-data\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.321649 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fhkf\" (UniqueName: \"kubernetes.io/projected/93b07df5-f911-4f65-bc31-da797bb04f2e-kube-api-access-5fhkf\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.321706 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-combined-ca-bundle\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.423558 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-config-data\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.423757 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-db-sync-config-data\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.423829 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fhkf\" (UniqueName: \"kubernetes.io/projected/93b07df5-f911-4f65-bc31-da797bb04f2e-kube-api-access-5fhkf\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.423859 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-combined-ca-bundle\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.433044 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-config-data\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.440509 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-db-sync-config-data\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.441845 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-combined-ca-bundle\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.444769 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fhkf\" (UniqueName: \"kubernetes.io/projected/93b07df5-f911-4f65-bc31-da797bb04f2e-kube-api-access-5fhkf\") pod \"glance-db-sync-jstcz\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") " pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.489053 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:51 crc kubenswrapper[4909]: I1126 08:27:51.499164 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36"
Nov 26 08:27:51 crc kubenswrapper[4909]: E1126 08:27:51.499409 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:27:52 crc kubenswrapper[4909]: I1126 08:27:52.035973 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jstcz"]
Nov 26 08:27:52 crc kubenswrapper[4909]: I1126 08:27:52.418079 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jstcz" event={"ID":"93b07df5-f911-4f65-bc31-da797bb04f2e","Type":"ContainerStarted","Data":"2085526abe8d91eaec2db5bbffe75b77da6c835b7e2078af8c2077b9370cee5c"}
Nov 26 08:27:53 crc kubenswrapper[4909]: I1126 08:27:53.429291 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jstcz" event={"ID":"93b07df5-f911-4f65-bc31-da797bb04f2e","Type":"ContainerStarted","Data":"8b4dd1d812494a82df935bf46e840b046bf1432bee728c6b29767c769a344f59"}
Nov 26 08:27:53 crc kubenswrapper[4909]: I1126 08:27:53.446986 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jstcz" podStartSLOduration=2.446955185 podStartE2EDuration="2.446955185s" podCreationTimestamp="2025-11-26 08:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:27:53.443473831 +0000 UTC m=+5245.589684997" watchObservedRunningTime="2025-11-26 08:27:53.446955185 +0000 UTC m=+5245.593166351"
Nov 26 08:27:56 crc kubenswrapper[4909]: I1126 08:27:56.456908 4909 generic.go:334] "Generic (PLEG): container finished" podID="93b07df5-f911-4f65-bc31-da797bb04f2e" containerID="8b4dd1d812494a82df935bf46e840b046bf1432bee728c6b29767c769a344f59" exitCode=0
Nov 26 08:27:56 crc kubenswrapper[4909]: I1126 08:27:56.456975 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jstcz" event={"ID":"93b07df5-f911-4f65-bc31-da797bb04f2e","Type":"ContainerDied","Data":"8b4dd1d812494a82df935bf46e840b046bf1432bee728c6b29767c769a344f59"}
Nov 26 08:27:57 crc kubenswrapper[4909]: I1126 08:27:57.915468 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.044051 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-combined-ca-bundle\") pod \"93b07df5-f911-4f65-bc31-da797bb04f2e\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") "
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.044122 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fhkf\" (UniqueName: \"kubernetes.io/projected/93b07df5-f911-4f65-bc31-da797bb04f2e-kube-api-access-5fhkf\") pod \"93b07df5-f911-4f65-bc31-da797bb04f2e\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") "
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.044215 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-db-sync-config-data\") pod \"93b07df5-f911-4f65-bc31-da797bb04f2e\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") "
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.044242 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-config-data\") pod \"93b07df5-f911-4f65-bc31-da797bb04f2e\" (UID: \"93b07df5-f911-4f65-bc31-da797bb04f2e\") "
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.052118 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "93b07df5-f911-4f65-bc31-da797bb04f2e" (UID: "93b07df5-f911-4f65-bc31-da797bb04f2e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.064050 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93b07df5-f911-4f65-bc31-da797bb04f2e-kube-api-access-5fhkf" (OuterVolumeSpecName: "kube-api-access-5fhkf") pod "93b07df5-f911-4f65-bc31-da797bb04f2e" (UID: "93b07df5-f911-4f65-bc31-da797bb04f2e"). InnerVolumeSpecName "kube-api-access-5fhkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.069460 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93b07df5-f911-4f65-bc31-da797bb04f2e" (UID: "93b07df5-f911-4f65-bc31-da797bb04f2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.098093 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-config-data" (OuterVolumeSpecName: "config-data") pod "93b07df5-f911-4f65-bc31-da797bb04f2e" (UID: "93b07df5-f911-4f65-bc31-da797bb04f2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.146687 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.146732 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fhkf\" (UniqueName: \"kubernetes.io/projected/93b07df5-f911-4f65-bc31-da797bb04f2e-kube-api-access-5fhkf\") on node \"crc\" DevicePath \"\""
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.146754 4909 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.146771 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b07df5-f911-4f65-bc31-da797bb04f2e-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.476834 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jstcz" event={"ID":"93b07df5-f911-4f65-bc31-da797bb04f2e","Type":"ContainerDied","Data":"2085526abe8d91eaec2db5bbffe75b77da6c835b7e2078af8c2077b9370cee5c"}
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.477190 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2085526abe8d91eaec2db5bbffe75b77da6c835b7e2078af8c2077b9370cee5c"
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.477039 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jstcz"
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.810550 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 26 08:27:58 crc kubenswrapper[4909]: E1126 08:27:58.816407 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b07df5-f911-4f65-bc31-da797bb04f2e" containerName="glance-db-sync"
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.816458 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b07df5-f911-4f65-bc31-da797bb04f2e" containerName="glance-db-sync"
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.816997 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="93b07df5-f911-4f65-bc31-da797bb04f2e" containerName="glance-db-sync"
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.818633 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.826441 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.826715 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dpn5l"
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.826905 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.827049 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.849272 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.917853 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8565f7649c-4pftv"]
Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.923824 4909 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.945352 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8565f7649c-4pftv"] Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.961446 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.961497 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.961528 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh5ml\" (UniqueName: \"kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-kube-api-access-kh5ml\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.961561 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-logs\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.961611 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-ceph\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.961665 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:58 crc kubenswrapper[4909]: I1126 08:27:58.961690 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.062817 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-config\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.062944 4909 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.062995 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.063032 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-sb\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.063073 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.063109 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.063149 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh5ml\" (UniqueName: \"kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-kube-api-access-kh5ml\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.063197 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-logs\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.063233 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cbvf\" (UniqueName: \"kubernetes.io/projected/19e3bd57-e081-4df9-adeb-06954615dd51-kube-api-access-2cbvf\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.063295 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-nb\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.063326 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-dns-svc\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.063366 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-ceph\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.066090 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.067517 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-logs\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.069393 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-ceph\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.070271 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.070754 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.075629 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.084207 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.086337 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh5ml\" (UniqueName: \"kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-kube-api-access-kh5ml\") pod \"glance-default-external-api-0\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.087589 4909 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.093223 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.107444 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.149741 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.165123 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-config\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.165222 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-sb\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.165298 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cbvf\" (UniqueName: \"kubernetes.io/projected/19e3bd57-e081-4df9-adeb-06954615dd51-kube-api-access-2cbvf\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.165326 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-nb\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.165344 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-dns-svc\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.166204 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-sb\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.168208 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-nb\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.168235 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-dns-svc\") pod 
\"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.169713 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-config\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.194404 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cbvf\" (UniqueName: \"kubernetes.io/projected/19e3bd57-e081-4df9-adeb-06954615dd51-kube-api-access-2cbvf\") pod \"dnsmasq-dns-8565f7649c-4pftv\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") " pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.246033 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.266884 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.266958 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.267002 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.267028 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.267054 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.267141 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz7qp\" (UniqueName: \"kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-kube-api-access-nz7qp\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 
08:27:59.267170 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.368402 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz7qp\" (UniqueName: \"kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-kube-api-access-nz7qp\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.368737 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.368764 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.368827 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.368873 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.368902 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.368927 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.369382 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.369739 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.375350 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.377139 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.383580 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.392002 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.400961 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz7qp\" (UniqueName: \"kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-kube-api-access-nz7qp\") pod \"glance-default-internal-api-0\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.489461 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.788641 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:27:59 crc kubenswrapper[4909]: I1126 08:27:59.815790 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8565f7649c-4pftv"] Nov 26 08:28:00 crc kubenswrapper[4909]: I1126 08:28:00.011703 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:28:00 crc kubenswrapper[4909]: I1126 08:28:00.152237 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:28:00 crc kubenswrapper[4909]: W1126 08:28:00.213957 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67a749e0_4d43_4fde_b60c_7b6272ef33f2.slice/crio-3ba79e02806e1481f59769c28947a7293089ed0ec7b689be6c72d9ecd630647e WatchSource:0}: Error finding container 3ba79e02806e1481f59769c28947a7293089ed0ec7b689be6c72d9ecd630647e: Status 404 returned error can't find the container with id 3ba79e02806e1481f59769c28947a7293089ed0ec7b689be6c72d9ecd630647e Nov 26 08:28:00 crc kubenswrapper[4909]: I1126 08:28:00.509139 4909 generic.go:334] "Generic (PLEG): container finished" podID="19e3bd57-e081-4df9-adeb-06954615dd51" containerID="40523a2f379875e2b1b8986a73c606cbd75f1cb72a6f7d2a8b102fc83d2f5186" exitCode=0 Nov 26 08:28:00 crc kubenswrapper[4909]: I1126 08:28:00.531880 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8565f7649c-4pftv" event={"ID":"19e3bd57-e081-4df9-adeb-06954615dd51","Type":"ContainerDied","Data":"40523a2f379875e2b1b8986a73c606cbd75f1cb72a6f7d2a8b102fc83d2f5186"} Nov 26 08:28:00 crc kubenswrapper[4909]: I1126 08:28:00.531948 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8565f7649c-4pftv" event={"ID":"19e3bd57-e081-4df9-adeb-06954615dd51","Type":"ContainerStarted","Data":"eef39ea9d5ac22b0449fe3bc76a0956938ba3a90795736d5f4a466197e67d94d"} Nov 26 08:28:00 crc kubenswrapper[4909]: I1126 08:28:00.531963 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"67a749e0-4d43-4fde-b60c-7b6272ef33f2","Type":"ContainerStarted","Data":"3ba79e02806e1481f59769c28947a7293089ed0ec7b689be6c72d9ecd630647e"} Nov 26 08:28:00 crc kubenswrapper[4909]: I1126 08:28:00.531974 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8efb99ca-7ccd-466b-a6dc-f4288398b4c8","Type":"ContainerStarted","Data":"790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9"} Nov 26 08:28:00 crc kubenswrapper[4909]: I1126 08:28:00.531984 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8efb99ca-7ccd-466b-a6dc-f4288398b4c8","Type":"ContainerStarted","Data":"6514fa93caf4bd17aeccd3c7436cacc2373837819bb414869a18c75b6b02213c"} Nov 26 08:28:01 crc kubenswrapper[4909]: I1126 08:28:01.530044 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"67a749e0-4d43-4fde-b60c-7b6272ef33f2","Type":"ContainerStarted","Data":"2fb053185025aff549854f3f80b21fb1d98bd7034de9e25d0c3a68c74561a071"} Nov 26 08:28:01 crc kubenswrapper[4909]: I1126 08:28:01.530634 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"67a749e0-4d43-4fde-b60c-7b6272ef33f2","Type":"ContainerStarted","Data":"218cb84eb8cbfe364345fdb69288402a5e80d13fc0785a67395917d49d2a7b7a"} Nov 26 08:28:01 crc kubenswrapper[4909]: I1126 08:28:01.534006 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8efb99ca-7ccd-466b-a6dc-f4288398b4c8","Type":"ContainerStarted","Data":"762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16"} Nov 26 08:28:01 crc kubenswrapper[4909]: I1126 08:28:01.534137 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8efb99ca-7ccd-466b-a6dc-f4288398b4c8" containerName="glance-log" containerID="cri-o://790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9" gracePeriod=30 Nov 26 08:28:01 crc kubenswrapper[4909]: I1126 08:28:01.534161 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8efb99ca-7ccd-466b-a6dc-f4288398b4c8" containerName="glance-httpd" containerID="cri-o://762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16" gracePeriod=30 Nov 26 08:28:01 crc kubenswrapper[4909]: I1126 08:28:01.536503 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8565f7649c-4pftv" event={"ID":"19e3bd57-e081-4df9-adeb-06954615dd51","Type":"ContainerStarted","Data":"b35da79673a7fead54a139acec62b4087487cacb593381a97ea1078371731d73"} Nov 26 08:28:01 crc kubenswrapper[4909]: I1126 08:28:01.536846 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:28:01 crc kubenswrapper[4909]: I1126 08:28:01.572704 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.572684336 podStartE2EDuration="2.572684336s" podCreationTimestamp="2025-11-26 08:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:28:01.557278805 +0000 UTC m=+5253.703489971" watchObservedRunningTime="2025-11-26 08:28:01.572684336 +0000 UTC m=+5253.718895502" Nov 26 08:28:01 crc kubenswrapper[4909]: I1126 08:28:01.586913 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8565f7649c-4pftv" podStartSLOduration=3.586892364 podStartE2EDuration="3.586892364s" podCreationTimestamp="2025-11-26 08:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:28:01.585861795 +0000 UTC m=+5253.732072961" watchObservedRunningTime="2025-11-26 08:28:01.586892364 +0000 UTC m=+5253.733103530" Nov 26 08:28:01 crc kubenswrapper[4909]: I1126 08:28:01.608829 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.608812783 podStartE2EDuration="3.608812783s" podCreationTimestamp="2025-11-26 08:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:28:01.604150825 +0000 UTC m=+5253.750361991" watchObservedRunningTime="2025-11-26 08:28:01.608812783 +0000 UTC m=+5253.755023939" Nov 26 08:28:01 crc kubenswrapper[4909]: I1126 08:28:01.924270 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.170926 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.322509 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-ceph\") pod \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.322576 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-scripts\") pod \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.322636 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-config-data\") pod \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.322696 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-combined-ca-bundle\") pod \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.322837 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-httpd-run\") pod \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.322874 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh5ml\" (UniqueName: \"kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-kube-api-access-kh5ml\") pod \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.322910 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-logs\") pod \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\" (UID: \"8efb99ca-7ccd-466b-a6dc-f4288398b4c8\") " Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.323956 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8efb99ca-7ccd-466b-a6dc-f4288398b4c8" (UID: "8efb99ca-7ccd-466b-a6dc-f4288398b4c8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.324054 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-logs" (OuterVolumeSpecName: "logs") pod "8efb99ca-7ccd-466b-a6dc-f4288398b4c8" (UID: "8efb99ca-7ccd-466b-a6dc-f4288398b4c8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.328763 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-ceph" (OuterVolumeSpecName: "ceph") pod "8efb99ca-7ccd-466b-a6dc-f4288398b4c8" (UID: "8efb99ca-7ccd-466b-a6dc-f4288398b4c8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.329016 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-scripts" (OuterVolumeSpecName: "scripts") pod "8efb99ca-7ccd-466b-a6dc-f4288398b4c8" (UID: "8efb99ca-7ccd-466b-a6dc-f4288398b4c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.335718 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-kube-api-access-kh5ml" (OuterVolumeSpecName: "kube-api-access-kh5ml") pod "8efb99ca-7ccd-466b-a6dc-f4288398b4c8" (UID: "8efb99ca-7ccd-466b-a6dc-f4288398b4c8"). InnerVolumeSpecName "kube-api-access-kh5ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.356048 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8efb99ca-7ccd-466b-a6dc-f4288398b4c8" (UID: "8efb99ca-7ccd-466b-a6dc-f4288398b4c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.369310 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-config-data" (OuterVolumeSpecName: "config-data") pod "8efb99ca-7ccd-466b-a6dc-f4288398b4c8" (UID: "8efb99ca-7ccd-466b-a6dc-f4288398b4c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.425267 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.425301 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.425313 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.425329 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.425343 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.425356 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh5ml\" (UniqueName: \"kubernetes.io/projected/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-kube-api-access-kh5ml\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.425367 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8efb99ca-7ccd-466b-a6dc-f4288398b4c8-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.550981 4909 generic.go:334] "Generic (PLEG): container finished" podID="8efb99ca-7ccd-466b-a6dc-f4288398b4c8" containerID="762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16" exitCode=0 Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.551018 4909 generic.go:334] "Generic (PLEG): container finished" podID="8efb99ca-7ccd-466b-a6dc-f4288398b4c8" containerID="790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9" exitCode=143 Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.551058 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.551068 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8efb99ca-7ccd-466b-a6dc-f4288398b4c8","Type":"ContainerDied","Data":"762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16"} Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.551124 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8efb99ca-7ccd-466b-a6dc-f4288398b4c8","Type":"ContainerDied","Data":"790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9"} Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.551137 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8efb99ca-7ccd-466b-a6dc-f4288398b4c8","Type":"ContainerDied","Data":"6514fa93caf4bd17aeccd3c7436cacc2373837819bb414869a18c75b6b02213c"} Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.551156 4909 scope.go:117] "RemoveContainer" containerID="762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.576689 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.576722 4909 scope.go:117] "RemoveContainer" containerID="790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.589992 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.605929 4909 scope.go:117] "RemoveContainer" containerID="762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.609442 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:28:03 crc kubenswrapper[4909]: E1126 08:28:02.609914 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efb99ca-7ccd-466b-a6dc-f4288398b4c8" containerName="glance-log" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.609929 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efb99ca-7ccd-466b-a6dc-f4288398b4c8" containerName="glance-log" Nov 26 08:28:03 crc kubenswrapper[4909]: E1126 08:28:02.609951 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efb99ca-7ccd-466b-a6dc-f4288398b4c8" containerName="glance-httpd" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.609959 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efb99ca-7ccd-466b-a6dc-f4288398b4c8" containerName="glance-httpd" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.610197 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efb99ca-7ccd-466b-a6dc-f4288398b4c8" containerName="glance-httpd" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.610220 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efb99ca-7ccd-466b-a6dc-f4288398b4c8" containerName="glance-log" Nov 26 08:28:03 crc kubenswrapper[4909]: E1126 08:28:02.611306 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16\": container with ID starting with 
762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16 not found: ID does not exist" containerID="762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.611361 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16"} err="failed to get container status \"762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16\": rpc error: code = NotFound desc = could not find container \"762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16\": container with ID starting with 762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16 not found: ID does not exist" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.611386 4909 scope.go:117] "RemoveContainer" containerID="790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.611635 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: E1126 08:28:02.615638 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9\": container with ID starting with 790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9 not found: ID does not exist" containerID="790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.615704 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9"} err="failed to get container status \"790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9\": rpc error: code = NotFound desc = could not find container \"790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9\": container with ID starting with 790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9 not found: ID does not exist" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.615740 4909 scope.go:117] "RemoveContainer" containerID="762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.616186 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16"} err="failed to get container status \"762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16\": rpc error: code = NotFound desc = could not find container \"762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16\": container with ID starting with 762da064001f549165233869ce70eb64e1400f6697c4aa3fe200e644a4953e16 not found: ID does not exist" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.616221 4909 scope.go:117] "RemoveContainer" containerID="790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.616689 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9"} err="failed to get container status \"790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9\": rpc error: code = NotFound desc = could not find container 
\"790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9\": container with ID starting with 790ec10ad6994d7f60d71f48315c3f06515a422f8d7db0e44aeeaf939eae38b9 not found: ID does not exist" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.616846 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.619072 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.732424 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-ceph\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.732463 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-logs\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.732497 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-config-data\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.732518 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.732615 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.732654 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-scripts\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.732673 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v7bj\" (UniqueName: \"kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-kube-api-access-9v7bj\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.851356 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-scripts\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.851433 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v7bj\" (UniqueName: \"kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-kube-api-access-9v7bj\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.851467 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-ceph\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.851507 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-logs\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.851601 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-config-data\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.851637 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.852006 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.853169 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.853366 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-logs\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.855229 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.855811 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-ceph\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.857714 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.861349 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-config-data\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:02.901512 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v7bj\" (UniqueName: \"kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-kube-api-access-9v7bj\") pod \"glance-default-external-api-0\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:03.017246 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:03.554086 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:03.560376 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="67a749e0-4d43-4fde-b60c-7b6272ef33f2" containerName="glance-log" containerID="cri-o://218cb84eb8cbfe364345fdb69288402a5e80d13fc0785a67395917d49d2a7b7a" gracePeriod=30 Nov 26 08:28:03 crc kubenswrapper[4909]: I1126 08:28:03.560987 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="67a749e0-4d43-4fde-b60c-7b6272ef33f2" containerName="glance-httpd" containerID="cri-o://2fb053185025aff549854f3f80b21fb1d98bd7034de9e25d0c3a68c74561a071" gracePeriod=30 Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.499069 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36" Nov 26 08:28:04 crc kubenswrapper[4909]: E1126 08:28:04.499827 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.511447 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8efb99ca-7ccd-466b-a6dc-f4288398b4c8" 
path="/var/lib/kubelet/pods/8efb99ca-7ccd-466b-a6dc-f4288398b4c8/volumes" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.584308 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14496a14-234c-4375-afc5-12458c70e15e","Type":"ContainerStarted","Data":"c869c1689a7b93f6ae3d29aea0e22914960ed5f18a2790d81d704c91127bc141"} Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.584685 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14496a14-234c-4375-afc5-12458c70e15e","Type":"ContainerStarted","Data":"3b37ad1402453f083696f6073ecb9ec192db3e9fe7b2e24180ec2d6a2298d070"} Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.586486 4909 generic.go:334] "Generic (PLEG): container finished" podID="67a749e0-4d43-4fde-b60c-7b6272ef33f2" containerID="2fb053185025aff549854f3f80b21fb1d98bd7034de9e25d0c3a68c74561a071" exitCode=0 Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.586517 4909 generic.go:334] "Generic (PLEG): container finished" podID="67a749e0-4d43-4fde-b60c-7b6272ef33f2" containerID="218cb84eb8cbfe364345fdb69288402a5e80d13fc0785a67395917d49d2a7b7a" exitCode=143 Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.586523 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"67a749e0-4d43-4fde-b60c-7b6272ef33f2","Type":"ContainerDied","Data":"2fb053185025aff549854f3f80b21fb1d98bd7034de9e25d0c3a68c74561a071"} Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.586669 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"67a749e0-4d43-4fde-b60c-7b6272ef33f2","Type":"ContainerDied","Data":"218cb84eb8cbfe364345fdb69288402a5e80d13fc0785a67395917d49d2a7b7a"} Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.713156 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.784084 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-logs\") pod \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.784202 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-httpd-run\") pod \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.784268 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-combined-ca-bundle\") pod \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.784316 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-scripts\") pod \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.784634 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-config-data\") pod \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.784724 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz7qp\" (UniqueName: \"kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-kube-api-access-nz7qp\") pod \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.784814 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-ceph\") pod \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\" (UID: \"67a749e0-4d43-4fde-b60c-7b6272ef33f2\") " Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.785382 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-logs" (OuterVolumeSpecName: "logs") pod "67a749e0-4d43-4fde-b60c-7b6272ef33f2" (UID: "67a749e0-4d43-4fde-b60c-7b6272ef33f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.785582 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "67a749e0-4d43-4fde-b60c-7b6272ef33f2" (UID: "67a749e0-4d43-4fde-b60c-7b6272ef33f2"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.790614 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-kube-api-access-nz7qp" (OuterVolumeSpecName: "kube-api-access-nz7qp") pod "67a749e0-4d43-4fde-b60c-7b6272ef33f2" (UID: "67a749e0-4d43-4fde-b60c-7b6272ef33f2"). InnerVolumeSpecName "kube-api-access-nz7qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.790668 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-ceph" (OuterVolumeSpecName: "ceph") pod "67a749e0-4d43-4fde-b60c-7b6272ef33f2" (UID: "67a749e0-4d43-4fde-b60c-7b6272ef33f2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.790958 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-scripts" (OuterVolumeSpecName: "scripts") pod "67a749e0-4d43-4fde-b60c-7b6272ef33f2" (UID: "67a749e0-4d43-4fde-b60c-7b6272ef33f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.791305 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz7qp\" (UniqueName: \"kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-kube-api-access-nz7qp\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.791341 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/67a749e0-4d43-4fde-b60c-7b6272ef33f2-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.791353 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.791363 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67a749e0-4d43-4fde-b60c-7b6272ef33f2-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.791373 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.812622 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67a749e0-4d43-4fde-b60c-7b6272ef33f2" (UID: "67a749e0-4d43-4fde-b60c-7b6272ef33f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.837725 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-config-data" (OuterVolumeSpecName: "config-data") pod "67a749e0-4d43-4fde-b60c-7b6272ef33f2" (UID: "67a749e0-4d43-4fde-b60c-7b6272ef33f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.892704 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:04 crc kubenswrapper[4909]: I1126 08:28:04.892746 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a749e0-4d43-4fde-b60c-7b6272ef33f2-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.599229 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"67a749e0-4d43-4fde-b60c-7b6272ef33f2","Type":"ContainerDied","Data":"3ba79e02806e1481f59769c28947a7293089ed0ec7b689be6c72d9ecd630647e"} Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.599641 4909 scope.go:117] "RemoveContainer" containerID="2fb053185025aff549854f3f80b21fb1d98bd7034de9e25d0c3a68c74561a071" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.599825 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.603197 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14496a14-234c-4375-afc5-12458c70e15e","Type":"ContainerStarted","Data":"453fb02790be82650f06c2805b40f324ace8ce078769e65b26a7189394db812f"} Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.620141 4909 scope.go:117] "RemoveContainer" containerID="218cb84eb8cbfe364345fdb69288402a5e80d13fc0785a67395917d49d2a7b7a" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.638477 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.63845878 podStartE2EDuration="3.63845878s" podCreationTimestamp="2025-11-26 08:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:28:05.63258139 +0000 UTC m=+5257.778792566" watchObservedRunningTime="2025-11-26 08:28:05.63845878 +0000 UTC m=+5257.784669956" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.656081 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.663021 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.680257 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:28:05 crc kubenswrapper[4909]: E1126 08:28:05.680841 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a749e0-4d43-4fde-b60c-7b6272ef33f2" containerName="glance-httpd" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.680954 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a749e0-4d43-4fde-b60c-7b6272ef33f2" containerName="glance-httpd" Nov 26 08:28:05 crc kubenswrapper[4909]: E1126 08:28:05.681082 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a749e0-4d43-4fde-b60c-7b6272ef33f2" containerName="glance-log" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.681148 4909 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="67a749e0-4d43-4fde-b60c-7b6272ef33f2" containerName="glance-log" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.681800 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a749e0-4d43-4fde-b60c-7b6272ef33f2" containerName="glance-httpd" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.681899 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a749e0-4d43-4fde-b60c-7b6272ef33f2" containerName="glance-log" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.683277 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.687174 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.705916 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.813441 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.813515 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-846r8\" (UniqueName: \"kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-kube-api-access-846r8\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.813551 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.813569 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.813610 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.813638 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-logs\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.813701 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.915370 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.915440 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-846r8\" (UniqueName: \"kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-kube-api-access-846r8\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.915484 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.915510 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.915545 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.915582 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-logs\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.915894 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.917088 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.917392 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-logs\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.920809 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.921918 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.922108 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.922677 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:05 crc kubenswrapper[4909]: I1126 08:28:05.933383 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-846r8\" (UniqueName: \"kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-kube-api-access-846r8\") pod \"glance-default-internal-api-0\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:28:06 crc kubenswrapper[4909]: I1126 08:28:06.005222 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 08:28:06 crc kubenswrapper[4909]: I1126 08:28:06.519865 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a749e0-4d43-4fde-b60c-7b6272ef33f2" path="/var/lib/kubelet/pods/67a749e0-4d43-4fde-b60c-7b6272ef33f2/volumes" Nov 26 08:28:06 crc kubenswrapper[4909]: W1126 08:28:06.608049 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7618e381_4984_4367_8cf4_69070b8c6fe5.slice/crio-a9c0ded700f8362378666af19aac85361ccf7784b9a36b5b2a8954ac302aad04 WatchSource:0}: Error finding container a9c0ded700f8362378666af19aac85361ccf7784b9a36b5b2a8954ac302aad04: Status 404 returned error can't find the container with id a9c0ded700f8362378666af19aac85361ccf7784b9a36b5b2a8954ac302aad04 Nov 26 08:28:06 crc kubenswrapper[4909]: I1126 08:28:06.618318 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:28:07 crc kubenswrapper[4909]: I1126 08:28:07.635495 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7618e381-4984-4367-8cf4-69070b8c6fe5","Type":"ContainerStarted","Data":"e1842706d0f176fca62a39b9ac85ca66b8d4aafcd5907fe9f3c3b1f6f4307760"} Nov 26 08:28:07 crc kubenswrapper[4909]: I1126 08:28:07.636078 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7618e381-4984-4367-8cf4-69070b8c6fe5","Type":"ContainerStarted","Data":"6d9a00e7734f3761e327649359f4afc22e17fb8d7b21852dc0ae28238eb666b5"} Nov 26 08:28:07 crc kubenswrapper[4909]: I1126 08:28:07.636088 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7618e381-4984-4367-8cf4-69070b8c6fe5","Type":"ContainerStarted","Data":"a9c0ded700f8362378666af19aac85361ccf7784b9a36b5b2a8954ac302aad04"} Nov 26 08:28:07 crc kubenswrapper[4909]: I1126 08:28:07.663368 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.66335 podStartE2EDuration="2.66335s" podCreationTimestamp="2025-11-26 08:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:28:07.656112852 +0000 UTC m=+5259.802324018" watchObservedRunningTime="2025-11-26 08:28:07.66335 +0000 UTC m=+5259.809561166" Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.646981 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2ggrp"] Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.650857 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.662829 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ggrp"] Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.775175 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-utilities\") pod \"redhat-operators-2ggrp\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.775230 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-catalog-content\") pod \"redhat-operators-2ggrp\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.775286 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pvmk\" (UniqueName: \"kubernetes.io/projected/d3674c51-de0c-4ab7-a26d-9647e349a912-kube-api-access-2pvmk\") pod \"redhat-operators-2ggrp\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.877387 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-utilities\") pod \"redhat-operators-2ggrp\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.877452 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-catalog-content\") pod \"redhat-operators-2ggrp\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.877499 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pvmk\" (UniqueName: \"kubernetes.io/projected/d3674c51-de0c-4ab7-a26d-9647e349a912-kube-api-access-2pvmk\") pod \"redhat-operators-2ggrp\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.878013 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-utilities\") pod \"redhat-operators-2ggrp\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.878355 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-catalog-content\") pod \"redhat-operators-2ggrp\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.896517 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2pvmk\" (UniqueName: \"kubernetes.io/projected/d3674c51-de0c-4ab7-a26d-9647e349a912-kube-api-access-2pvmk\") pod \"redhat-operators-2ggrp\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:08 crc kubenswrapper[4909]: I1126 08:28:08.990083 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.248361 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8565f7649c-4pftv" Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.307351 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-94d77d5bf-ct248"] Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.307665 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-94d77d5bf-ct248" podUID="7d6894c7-d5b6-422d-b870-bf1116c593a1" containerName="dnsmasq-dns" containerID="cri-o://1425eb56e2a6641f1f2752d67adaf1a84dc696aef0b549cd423bd9a3f8731867" gracePeriod=10 Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.458139 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ggrp"] Nov 26 08:28:09 crc kubenswrapper[4909]: W1126 08:28:09.461586 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3674c51_de0c_4ab7_a26d_9647e349a912.slice/crio-e3ce06be477c35059f54282848ca53f21c84d6231d964fe3db043daba8d02f81 WatchSource:0}: Error finding container e3ce06be477c35059f54282848ca53f21c84d6231d964fe3db043daba8d02f81: Status 404 returned error can't find the container with id e3ce06be477c35059f54282848ca53f21c84d6231d964fe3db043daba8d02f81 Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.664026 4909 generic.go:334] "Generic (PLEG): container finished" podID="7d6894c7-d5b6-422d-b870-bf1116c593a1" containerID="1425eb56e2a6641f1f2752d67adaf1a84dc696aef0b549cd423bd9a3f8731867" exitCode=0 Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.664072 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94d77d5bf-ct248" event={"ID":"7d6894c7-d5b6-422d-b870-bf1116c593a1","Type":"ContainerDied","Data":"1425eb56e2a6641f1f2752d67adaf1a84dc696aef0b549cd423bd9a3f8731867"} Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.666281 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ggrp" event={"ID":"d3674c51-de0c-4ab7-a26d-9647e349a912","Type":"ContainerStarted","Data":"0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187"} Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.666332 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ggrp" event={"ID":"d3674c51-de0c-4ab7-a26d-9647e349a912","Type":"ContainerStarted","Data":"e3ce06be477c35059f54282848ca53f21c84d6231d964fe3db043daba8d02f81"} Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.801626 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-94d77d5bf-ct248" Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.902440 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-nb\") pod \"7d6894c7-d5b6-422d-b870-bf1116c593a1\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.902541 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-sb\") pod \"7d6894c7-d5b6-422d-b870-bf1116c593a1\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.902631 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-dns-svc\") pod \"7d6894c7-d5b6-422d-b870-bf1116c593a1\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.902671 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-config\") pod \"7d6894c7-d5b6-422d-b870-bf1116c593a1\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.902703 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp7nm\" (UniqueName: \"kubernetes.io/projected/7d6894c7-d5b6-422d-b870-bf1116c593a1-kube-api-access-zp7nm\") pod \"7d6894c7-d5b6-422d-b870-bf1116c593a1\" (UID: \"7d6894c7-d5b6-422d-b870-bf1116c593a1\") " Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.915883 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6894c7-d5b6-422d-b870-bf1116c593a1-kube-api-access-zp7nm" (OuterVolumeSpecName: "kube-api-access-zp7nm") pod "7d6894c7-d5b6-422d-b870-bf1116c593a1" (UID: "7d6894c7-d5b6-422d-b870-bf1116c593a1"). InnerVolumeSpecName "kube-api-access-zp7nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.955807 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7d6894c7-d5b6-422d-b870-bf1116c593a1" (UID: "7d6894c7-d5b6-422d-b870-bf1116c593a1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.968072 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-config" (OuterVolumeSpecName: "config") pod "7d6894c7-d5b6-422d-b870-bf1116c593a1" (UID: "7d6894c7-d5b6-422d-b870-bf1116c593a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.981441 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7d6894c7-d5b6-422d-b870-bf1116c593a1" (UID: "7d6894c7-d5b6-422d-b870-bf1116c593a1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:28:09 crc kubenswrapper[4909]: I1126 08:28:09.984819 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d6894c7-d5b6-422d-b870-bf1116c593a1" (UID: "7d6894c7-d5b6-422d-b870-bf1116c593a1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.004973 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.005006 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.005016 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.005027 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6894c7-d5b6-422d-b870-bf1116c593a1-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.005037 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp7nm\" (UniqueName: \"kubernetes.io/projected/7d6894c7-d5b6-422d-b870-bf1116c593a1-kube-api-access-zp7nm\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.683476 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-94d77d5bf-ct248" Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.683456 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94d77d5bf-ct248" event={"ID":"7d6894c7-d5b6-422d-b870-bf1116c593a1","Type":"ContainerDied","Data":"eb180194e01eeebaf781cd9de3591473dba801b650db3b7848ce3a75142eb8bb"} Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.684147 4909 scope.go:117] "RemoveContainer" containerID="1425eb56e2a6641f1f2752d67adaf1a84dc696aef0b549cd423bd9a3f8731867" Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.687418 4909 generic.go:334] "Generic (PLEG): container finished" podID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerID="0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187" exitCode=0 Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.687461 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ggrp" event={"ID":"d3674c51-de0c-4ab7-a26d-9647e349a912","Type":"ContainerDied","Data":"0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187"} Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.728883 4909 scope.go:117] "RemoveContainer" containerID="e24b8699d2ae09e9299c30e2570f787232f609dcb667cd3fe07da58a0e151085" Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.767965 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-94d77d5bf-ct248"] Nov 26 08:28:10 crc kubenswrapper[4909]: I1126 08:28:10.784841 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-94d77d5bf-ct248"] Nov 26 08:28:12 crc kubenswrapper[4909]: I1126 08:28:12.510111 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6894c7-d5b6-422d-b870-bf1116c593a1" path="/var/lib/kubelet/pods/7d6894c7-d5b6-422d-b870-bf1116c593a1/volumes" Nov 26 08:28:12 crc kubenswrapper[4909]: I1126 08:28:12.706657 4909 generic.go:334] "Generic (PLEG): container finished" podID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerID="f43ae26829c23fee00445ff686602b0f48f6e57255c27cf9db849555932a650f" exitCode=0 Nov 26 08:28:12 crc kubenswrapper[4909]: I1126 08:28:12.706723 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ggrp" event={"ID":"d3674c51-de0c-4ab7-a26d-9647e349a912","Type":"ContainerDied","Data":"f43ae26829c23fee00445ff686602b0f48f6e57255c27cf9db849555932a650f"} Nov 26 08:28:13 crc kubenswrapper[4909]: I1126 08:28:13.017851 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 26 08:28:13 crc kubenswrapper[4909]: I1126 08:28:13.018288 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 26 08:28:13 crc kubenswrapper[4909]: I1126 08:28:13.068788 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 26 08:28:13 crc kubenswrapper[4909]: I1126 08:28:13.094812 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 26 08:28:13 crc kubenswrapper[4909]: I1126 08:28:13.720484 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ggrp" event={"ID":"d3674c51-de0c-4ab7-a26d-9647e349a912","Type":"ContainerStarted","Data":"80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853"} Nov 26 08:28:13 crc 
kubenswrapper[4909]: I1126 08:28:13.721168 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 26 08:28:13 crc kubenswrapper[4909]: I1126 08:28:13.721235 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 26 08:28:13 crc kubenswrapper[4909]: I1126 08:28:13.752637 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2ggrp" podStartSLOduration=3.117259969 podStartE2EDuration="5.752585815s" podCreationTimestamp="2025-11-26 08:28:08 +0000 UTC" firstStartedPulling="2025-11-26 08:28:10.690996619 +0000 UTC m=+5262.837207825" lastFinishedPulling="2025-11-26 08:28:13.326322465 +0000 UTC m=+5265.472533671" observedRunningTime="2025-11-26 08:28:13.745556442 +0000 UTC m=+5265.891767608" watchObservedRunningTime="2025-11-26 08:28:13.752585815 +0000 UTC m=+5265.898796981"
Nov 26 08:28:15 crc kubenswrapper[4909]: I1126 08:28:15.693828 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 26 08:28:15 crc kubenswrapper[4909]: I1126 08:28:15.702565 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 26 08:28:16 crc kubenswrapper[4909]: I1126 08:28:16.007245 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 26 08:28:16 crc kubenswrapper[4909]: I1126 08:28:16.007497 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 26 08:28:16 crc kubenswrapper[4909]: I1126 08:28:16.046063 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 26 08:28:16 crc kubenswrapper[4909]: I1126 08:28:16.055101 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 26 08:28:16 crc kubenswrapper[4909]: I1126 08:28:16.498622 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36"
Nov 26 08:28:16 crc kubenswrapper[4909]: I1126 08:28:16.788275 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"e5a2ca2aa716ec654cdedcf8d1dd83703540811f543055e7744242b9dcfda8f7"}
Nov 26 08:28:16 crc kubenswrapper[4909]: I1126 08:28:16.789749 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 26 08:28:16 crc kubenswrapper[4909]: I1126 08:28:16.789802 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 26 08:28:18 crc kubenswrapper[4909]: I1126 08:28:18.776195 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 26 08:28:18 crc kubenswrapper[4909]: I1126 08:28:18.808430 4909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 26 08:28:18 crc kubenswrapper[4909]: I1126 08:28:18.927865 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
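The pod_startup_latency_tracker entry above for redhat-operators-2ggrp reports podStartSLOduration=3.117259969 against podStartE2EDuration="5.752585815s". The logged values fit SLO duration = end-to-end duration minus the image-pull window (firstStartedPulling to lastFinishedPulling); the sketch below only replays that arithmetic from the logged timestamps and is illustrative, not tracker source:

// Sketch: reproduce the podStartSLOduration arithmetic from the
// "Observed pod startup duration" entry above. The numbers show the SLO
// duration is the end-to-end startup minus the image-pull window:
// 5.752585815s - (13.326322465 - 10.690996619)s = 3.117259969s.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.000000000 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	firstStartedPulling := parse("2025-11-26 08:28:10.690996619 +0000 UTC")
	lastFinishedPulling := parse("2025-11-26 08:28:13.326322465 +0000 UTC")
	e2e := 5752585815 * time.Nanosecond // podStartE2EDuration="5.752585815s"

	pull := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println("image pull window:  ", pull)     // 2.635325846s
	fmt.Println("podStartSLOduration:", e2e-pull) // 3.117259969s, as logged
}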
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:18 crc kubenswrapper[4909]: I1126 08:28:18.991652 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:20 crc kubenswrapper[4909]: I1126 08:28:20.049928 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2ggrp" podUID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerName="registry-server" probeResult="failure" output=< Nov 26 08:28:20 crc kubenswrapper[4909]: timeout: failed to connect service ":50051" within 1s Nov 26 08:28:20 crc kubenswrapper[4909]: > Nov 26 08:28:24 crc kubenswrapper[4909]: I1126 08:28:24.894495 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tgcsh"] Nov 26 08:28:24 crc kubenswrapper[4909]: E1126 08:28:24.896057 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6894c7-d5b6-422d-b870-bf1116c593a1" containerName="dnsmasq-dns" Nov 26 08:28:24 crc kubenswrapper[4909]: I1126 08:28:24.896076 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6894c7-d5b6-422d-b870-bf1116c593a1" containerName="dnsmasq-dns" Nov 26 08:28:24 crc kubenswrapper[4909]: E1126 08:28:24.896102 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6894c7-d5b6-422d-b870-bf1116c593a1" containerName="init" Nov 26 08:28:24 crc kubenswrapper[4909]: I1126 08:28:24.896110 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6894c7-d5b6-422d-b870-bf1116c593a1" containerName="init" Nov 26 08:28:24 crc kubenswrapper[4909]: I1126 08:28:24.896384 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6894c7-d5b6-422d-b870-bf1116c593a1" containerName="dnsmasq-dns" Nov 26 08:28:24 crc kubenswrapper[4909]: I1126 08:28:24.898840 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:24 crc kubenswrapper[4909]: I1126 08:28:24.918317 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgcsh"] Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.000450 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-catalog-content\") pod \"certified-operators-tgcsh\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.000526 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-utilities\") pod \"certified-operators-tgcsh\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.000889 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2nw7\" (UniqueName: \"kubernetes.io/projected/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-kube-api-access-w2nw7\") pod \"certified-operators-tgcsh\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.102910 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2nw7\" (UniqueName: \"kubernetes.io/projected/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-kube-api-access-w2nw7\") pod \"certified-operators-tgcsh\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.103007 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-catalog-content\") pod \"certified-operators-tgcsh\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.103073 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-utilities\") pod \"certified-operators-tgcsh\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.103567 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-catalog-content\") pod \"certified-operators-tgcsh\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.103629 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-utilities\") pod \"certified-operators-tgcsh\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.126405 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w2nw7\" (UniqueName: \"kubernetes.io/projected/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-kube-api-access-w2nw7\") pod \"certified-operators-tgcsh\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.231371 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.731575 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgcsh"] Nov 26 08:28:25 crc kubenswrapper[4909]: I1126 08:28:25.902172 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgcsh" event={"ID":"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0","Type":"ContainerStarted","Data":"8375f3275f65a93d60bdc24fb1b4c8528b9db85fad387752a5f1491a6804076a"} Nov 26 08:28:26 crc kubenswrapper[4909]: I1126 08:28:26.828760 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-42pmh"] Nov 26 08:28:26 crc kubenswrapper[4909]: I1126 08:28:26.830305 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-42pmh" Nov 26 08:28:26 crc kubenswrapper[4909]: I1126 08:28:26.848724 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-42pmh"] Nov 26 08:28:26 crc kubenswrapper[4909]: I1126 08:28:26.911462 4909 generic.go:334] "Generic (PLEG): container finished" podID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" containerID="c57dbd05208876fa658d07e5d54204b3716dd91800e75f2fc2e55d3d61779d35" exitCode=0 Nov 26 08:28:26 crc kubenswrapper[4909]: I1126 08:28:26.911504 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgcsh" event={"ID":"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0","Type":"ContainerDied","Data":"c57dbd05208876fa658d07e5d54204b3716dd91800e75f2fc2e55d3d61779d35"} Nov 26 08:28:26 crc kubenswrapper[4909]: I1126 08:28:26.932402 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29gbf\" (UniqueName: \"kubernetes.io/projected/20351e45-309d-4c1d-8dda-6b18add05075-kube-api-access-29gbf\") pod \"placement-db-create-42pmh\" (UID: \"20351e45-309d-4c1d-8dda-6b18add05075\") " pod="openstack/placement-db-create-42pmh" Nov 26 08:28:27 crc kubenswrapper[4909]: I1126 08:28:27.033926 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29gbf\" (UniqueName: \"kubernetes.io/projected/20351e45-309d-4c1d-8dda-6b18add05075-kube-api-access-29gbf\") pod \"placement-db-create-42pmh\" (UID: \"20351e45-309d-4c1d-8dda-6b18add05075\") " pod="openstack/placement-db-create-42pmh" Nov 26 08:28:27 crc kubenswrapper[4909]: I1126 08:28:27.052086 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29gbf\" (UniqueName: \"kubernetes.io/projected/20351e45-309d-4c1d-8dda-6b18add05075-kube-api-access-29gbf\") pod \"placement-db-create-42pmh\" (UID: \"20351e45-309d-4c1d-8dda-6b18add05075\") " pod="openstack/placement-db-create-42pmh" Nov 26 08:28:27 crc kubenswrapper[4909]: I1126 08:28:27.150082 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-42pmh" Nov 26 08:28:27 crc kubenswrapper[4909]: I1126 08:28:27.654799 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-42pmh"] Nov 26 08:28:27 crc kubenswrapper[4909]: W1126 08:28:27.660274 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20351e45_309d_4c1d_8dda_6b18add05075.slice/crio-a3f2f5c0c112d85e32bdf87e641d7bf047e53248f3799f32280aab02d7d3e11c WatchSource:0}: Error finding container a3f2f5c0c112d85e32bdf87e641d7bf047e53248f3799f32280aab02d7d3e11c: Status 404 returned error can't find the container with id a3f2f5c0c112d85e32bdf87e641d7bf047e53248f3799f32280aab02d7d3e11c Nov 26 08:28:27 crc kubenswrapper[4909]: I1126 08:28:27.924985 4909 generic.go:334] "Generic (PLEG): container finished" podID="20351e45-309d-4c1d-8dda-6b18add05075" containerID="6e6d19ceba050ea62d01d987a8b92bedd8dd8ab21c1fa542e5bb0fc0b14fb9fc" exitCode=0 Nov 26 08:28:27 crc kubenswrapper[4909]: I1126 08:28:27.925095 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-42pmh" event={"ID":"20351e45-309d-4c1d-8dda-6b18add05075","Type":"ContainerDied","Data":"6e6d19ceba050ea62d01d987a8b92bedd8dd8ab21c1fa542e5bb0fc0b14fb9fc"} Nov 26 08:28:27 crc kubenswrapper[4909]: I1126 08:28:27.925466 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-42pmh" event={"ID":"20351e45-309d-4c1d-8dda-6b18add05075","Type":"ContainerStarted","Data":"a3f2f5c0c112d85e32bdf87e641d7bf047e53248f3799f32280aab02d7d3e11c"} Nov 26 08:28:27 crc kubenswrapper[4909]: I1126 08:28:27.929071 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgcsh" event={"ID":"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0","Type":"ContainerStarted","Data":"5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80"} Nov 26 08:28:28 crc kubenswrapper[4909]: I1126 08:28:28.938821 4909 generic.go:334] "Generic (PLEG): container finished" podID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" containerID="5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80" exitCode=0 Nov 26 08:28:28 crc kubenswrapper[4909]: I1126 08:28:28.938930 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgcsh" event={"ID":"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0","Type":"ContainerDied","Data":"5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80"} Nov 26 08:28:29 crc kubenswrapper[4909]: I1126 08:28:29.037222 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:29 crc kubenswrapper[4909]: I1126 08:28:29.083779 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:29 crc kubenswrapper[4909]: I1126 08:28:29.263892 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-42pmh" Nov 26 08:28:29 crc kubenswrapper[4909]: I1126 08:28:29.381773 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29gbf\" (UniqueName: \"kubernetes.io/projected/20351e45-309d-4c1d-8dda-6b18add05075-kube-api-access-29gbf\") pod \"20351e45-309d-4c1d-8dda-6b18add05075\" (UID: \"20351e45-309d-4c1d-8dda-6b18add05075\") " Nov 26 08:28:29 crc kubenswrapper[4909]: I1126 08:28:29.389889 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20351e45-309d-4c1d-8dda-6b18add05075-kube-api-access-29gbf" (OuterVolumeSpecName: "kube-api-access-29gbf") pod "20351e45-309d-4c1d-8dda-6b18add05075" (UID: "20351e45-309d-4c1d-8dda-6b18add05075"). InnerVolumeSpecName "kube-api-access-29gbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:28:29 crc kubenswrapper[4909]: I1126 08:28:29.485721 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29gbf\" (UniqueName: \"kubernetes.io/projected/20351e45-309d-4c1d-8dda-6b18add05075-kube-api-access-29gbf\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:29 crc kubenswrapper[4909]: I1126 08:28:29.953416 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-42pmh" event={"ID":"20351e45-309d-4c1d-8dda-6b18add05075","Type":"ContainerDied","Data":"a3f2f5c0c112d85e32bdf87e641d7bf047e53248f3799f32280aab02d7d3e11c"} Nov 26 08:28:29 crc kubenswrapper[4909]: I1126 08:28:29.953457 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3f2f5c0c112d85e32bdf87e641d7bf047e53248f3799f32280aab02d7d3e11c" Nov 26 08:28:29 crc kubenswrapper[4909]: I1126 08:28:29.953434 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-42pmh" Nov 26 08:28:29 crc kubenswrapper[4909]: I1126 08:28:29.956943 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgcsh" event={"ID":"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0","Type":"ContainerStarted","Data":"70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1"} Nov 26 08:28:29 crc kubenswrapper[4909]: I1126 08:28:29.981066 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tgcsh" podStartSLOduration=3.49146046 podStartE2EDuration="5.981042157s" podCreationTimestamp="2025-11-26 08:28:24 +0000 UTC" firstStartedPulling="2025-11-26 08:28:26.913336584 +0000 UTC m=+5279.059547750" lastFinishedPulling="2025-11-26 08:28:29.402918281 +0000 UTC m=+5281.549129447" observedRunningTime="2025-11-26 08:28:29.974022195 +0000 UTC m=+5282.120233371" watchObservedRunningTime="2025-11-26 08:28:29.981042157 +0000 UTC m=+5282.127253333" Nov 26 08:28:31 crc kubenswrapper[4909]: I1126 08:28:31.444066 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ggrp"] Nov 26 08:28:31 crc kubenswrapper[4909]: I1126 08:28:31.444678 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2ggrp" podUID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerName="registry-server" containerID="cri-o://80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853" gracePeriod=2 Nov 26 08:28:31 crc kubenswrapper[4909]: I1126 08:28:31.932366 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:31 crc kubenswrapper[4909]: I1126 08:28:31.979276 4909 generic.go:334] "Generic (PLEG): container finished" podID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerID="80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853" exitCode=0 Nov 26 08:28:31 crc kubenswrapper[4909]: I1126 08:28:31.979324 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ggrp" event={"ID":"d3674c51-de0c-4ab7-a26d-9647e349a912","Type":"ContainerDied","Data":"80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853"} Nov 26 08:28:31 crc kubenswrapper[4909]: I1126 08:28:31.979337 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ggrp" Nov 26 08:28:31 crc kubenswrapper[4909]: I1126 08:28:31.979354 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ggrp" event={"ID":"d3674c51-de0c-4ab7-a26d-9647e349a912","Type":"ContainerDied","Data":"e3ce06be477c35059f54282848ca53f21c84d6231d964fe3db043daba8d02f81"} Nov 26 08:28:31 crc kubenswrapper[4909]: I1126 08:28:31.979378 4909 scope.go:117] "RemoveContainer" containerID="80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.006877 4909 scope.go:117] "RemoveContainer" containerID="f43ae26829c23fee00445ff686602b0f48f6e57255c27cf9db849555932a650f" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.032710 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-utilities\") pod \"d3674c51-de0c-4ab7-a26d-9647e349a912\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.032775 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pvmk\" (UniqueName: \"kubernetes.io/projected/d3674c51-de0c-4ab7-a26d-9647e349a912-kube-api-access-2pvmk\") pod \"d3674c51-de0c-4ab7-a26d-9647e349a912\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.032975 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-catalog-content\") pod \"d3674c51-de0c-4ab7-a26d-9647e349a912\" (UID: \"d3674c51-de0c-4ab7-a26d-9647e349a912\") " Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.034082 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-utilities" (OuterVolumeSpecName: "utilities") pod "d3674c51-de0c-4ab7-a26d-9647e349a912" (UID: "d3674c51-de0c-4ab7-a26d-9647e349a912"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.038532 4909 scope.go:117] "RemoveContainer" containerID="0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.039423 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3674c51-de0c-4ab7-a26d-9647e349a912-kube-api-access-2pvmk" (OuterVolumeSpecName: "kube-api-access-2pvmk") pod "d3674c51-de0c-4ab7-a26d-9647e349a912" (UID: "d3674c51-de0c-4ab7-a26d-9647e349a912"). InnerVolumeSpecName "kube-api-access-2pvmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.127074 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3674c51-de0c-4ab7-a26d-9647e349a912" (UID: "d3674c51-de0c-4ab7-a26d-9647e349a912"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.134921 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.134962 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3674c51-de0c-4ab7-a26d-9647e349a912-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.134976 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pvmk\" (UniqueName: \"kubernetes.io/projected/d3674c51-de0c-4ab7-a26d-9647e349a912-kube-api-access-2pvmk\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.143244 4909 scope.go:117] "RemoveContainer" containerID="80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853" Nov 26 08:28:32 crc kubenswrapper[4909]: E1126 08:28:32.143773 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853\": container with ID starting with 80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853 not found: ID does not exist" containerID="80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.143801 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853"} err="failed to get container status \"80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853\": rpc error: code = NotFound desc = could not find container \"80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853\": container with ID starting with 80050c0d4d1bce365dd906c921bace79f7b0991a13a6adde14fd76634b4d4853 not found: ID does not exist" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.143819 4909 scope.go:117] "RemoveContainer" containerID="f43ae26829c23fee00445ff686602b0f48f6e57255c27cf9db849555932a650f" Nov 26 08:28:32 crc kubenswrapper[4909]: E1126 08:28:32.144726 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"f43ae26829c23fee00445ff686602b0f48f6e57255c27cf9db849555932a650f\": container with ID starting with f43ae26829c23fee00445ff686602b0f48f6e57255c27cf9db849555932a650f not found: ID does not exist" containerID="f43ae26829c23fee00445ff686602b0f48f6e57255c27cf9db849555932a650f" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.144753 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f43ae26829c23fee00445ff686602b0f48f6e57255c27cf9db849555932a650f"} err="failed to get container status \"f43ae26829c23fee00445ff686602b0f48f6e57255c27cf9db849555932a650f\": rpc error: code = NotFound desc = could not find container \"f43ae26829c23fee00445ff686602b0f48f6e57255c27cf9db849555932a650f\": container with ID starting with f43ae26829c23fee00445ff686602b0f48f6e57255c27cf9db849555932a650f not found: ID does not exist" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.144772 4909 scope.go:117] "RemoveContainer" containerID="0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187" Nov 26 08:28:32 crc kubenswrapper[4909]: E1126 08:28:32.145169 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187\": container with ID starting with 0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187 not found: ID does not exist" containerID="0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.145232 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187"} err="failed to get container status \"0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187\": rpc error: code = NotFound desc = could not find container \"0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187\": container with ID starting with 0489f04239f945a8ed2584fde1d0cec0153c61dc7a739aa688b1ccc023871187 not found: ID does not exist" Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.322305 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ggrp"] Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.334451 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2ggrp"] Nov 26 08:28:32 crc kubenswrapper[4909]: I1126 08:28:32.512872 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3674c51-de0c-4ab7-a26d-9647e349a912" path="/var/lib/kubelet/pods/d3674c51-de0c-4ab7-a26d-9647e349a912/volumes" Nov 26 08:28:35 crc kubenswrapper[4909]: I1126 08:28:35.232139 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:35 crc kubenswrapper[4909]: I1126 08:28:35.232482 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:35 crc kubenswrapper[4909]: I1126 08:28:35.282254 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.102218 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.444365 4909 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgcsh"] Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.856531 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d8fd-account-create-2rknr"] Nov 26 08:28:36 crc kubenswrapper[4909]: E1126 08:28:36.856911 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20351e45-309d-4c1d-8dda-6b18add05075" containerName="mariadb-database-create" Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.856930 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="20351e45-309d-4c1d-8dda-6b18add05075" containerName="mariadb-database-create" Nov 26 08:28:36 crc kubenswrapper[4909]: E1126 08:28:36.856950 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerName="extract-utilities" Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.856956 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerName="extract-utilities" Nov 26 08:28:36 crc kubenswrapper[4909]: E1126 08:28:36.856968 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerName="registry-server" Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.856975 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerName="registry-server" Nov 26 08:28:36 crc kubenswrapper[4909]: E1126 08:28:36.857001 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerName="extract-content" Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.857007 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerName="extract-content" Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.857164 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3674c51-de0c-4ab7-a26d-9647e349a912" containerName="registry-server" Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.857193 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="20351e45-309d-4c1d-8dda-6b18add05075" containerName="mariadb-database-create" Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.857832 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d8fd-account-create-2rknr" Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.860279 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 26 08:28:36 crc kubenswrapper[4909]: I1126 08:28:36.871138 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d8fd-account-create-2rknr"] Nov 26 08:28:37 crc kubenswrapper[4909]: I1126 08:28:37.037811 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd5ms\" (UniqueName: \"kubernetes.io/projected/27055241-fb7f-439f-a0ce-54243ce3d2eb-kube-api-access-nd5ms\") pod \"placement-d8fd-account-create-2rknr\" (UID: \"27055241-fb7f-439f-a0ce-54243ce3d2eb\") " pod="openstack/placement-d8fd-account-create-2rknr" Nov 26 08:28:37 crc kubenswrapper[4909]: I1126 08:28:37.139511 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd5ms\" (UniqueName: \"kubernetes.io/projected/27055241-fb7f-439f-a0ce-54243ce3d2eb-kube-api-access-nd5ms\") pod \"placement-d8fd-account-create-2rknr\" (UID: \"27055241-fb7f-439f-a0ce-54243ce3d2eb\") " pod="openstack/placement-d8fd-account-create-2rknr" Nov 26 08:28:37 crc kubenswrapper[4909]: I1126 08:28:37.174455 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd5ms\" (UniqueName: \"kubernetes.io/projected/27055241-fb7f-439f-a0ce-54243ce3d2eb-kube-api-access-nd5ms\") pod \"placement-d8fd-account-create-2rknr\" (UID: \"27055241-fb7f-439f-a0ce-54243ce3d2eb\") " pod="openstack/placement-d8fd-account-create-2rknr" Nov 26 08:28:37 crc kubenswrapper[4909]: I1126 08:28:37.184896 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d8fd-account-create-2rknr" Nov 26 08:28:37 crc kubenswrapper[4909]: I1126 08:28:37.514684 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d8fd-account-create-2rknr"] Nov 26 08:28:37 crc kubenswrapper[4909]: I1126 08:28:37.903174 4909 scope.go:117] "RemoveContainer" containerID="afaed7b92dbe727c2bb753f9d5e6c331b3f8acd71e706d4bfd4dedcb2efec108" Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.052780 4909 generic.go:334] "Generic (PLEG): container finished" podID="27055241-fb7f-439f-a0ce-54243ce3d2eb" containerID="d9bb9fe67506888b513f123dcd69894e6de7804ee8c76e8fe3fb5e595b0cb069" exitCode=0 Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.053044 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tgcsh" podUID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" containerName="registry-server" containerID="cri-o://70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1" gracePeriod=2 Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.053436 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d8fd-account-create-2rknr" event={"ID":"27055241-fb7f-439f-a0ce-54243ce3d2eb","Type":"ContainerDied","Data":"d9bb9fe67506888b513f123dcd69894e6de7804ee8c76e8fe3fb5e595b0cb069"} Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.053479 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d8fd-account-create-2rknr" event={"ID":"27055241-fb7f-439f-a0ce-54243ce3d2eb","Type":"ContainerStarted","Data":"4a52260e47b9be05f92406e7d3d27c4a6463f4845a44b1cffa8f6d4735270d70"} Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.512208 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.672940 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2nw7\" (UniqueName: \"kubernetes.io/projected/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-kube-api-access-w2nw7\") pod \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.673051 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-utilities\") pod \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.673076 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-catalog-content\") pod \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\" (UID: \"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0\") " Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.674794 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-utilities" (OuterVolumeSpecName: "utilities") pod "4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" (UID: "4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.679393 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-kube-api-access-w2nw7" (OuterVolumeSpecName: "kube-api-access-w2nw7") pod "4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" (UID: "4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0"). InnerVolumeSpecName "kube-api-access-w2nw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.735835 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" (UID: "4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.775421 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2nw7\" (UniqueName: \"kubernetes.io/projected/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-kube-api-access-w2nw7\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.775501 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:38 crc kubenswrapper[4909]: I1126 08:28:38.775524 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.077688 4909 generic.go:334] "Generic (PLEG): container finished" podID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" containerID="70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1" exitCode=0 Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.078113 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tgcsh" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.078725 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgcsh" event={"ID":"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0","Type":"ContainerDied","Data":"70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1"} Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.078814 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgcsh" event={"ID":"4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0","Type":"ContainerDied","Data":"8375f3275f65a93d60bdc24fb1b4c8528b9db85fad387752a5f1491a6804076a"} Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.078854 4909 scope.go:117] "RemoveContainer" containerID="70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.124632 4909 scope.go:117] "RemoveContainer" containerID="5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.138509 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgcsh"] Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.151586 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tgcsh"] Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.160280 4909 scope.go:117] "RemoveContainer" containerID="c57dbd05208876fa658d07e5d54204b3716dd91800e75f2fc2e55d3d61779d35" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.223812 4909 scope.go:117] "RemoveContainer" containerID="70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1" Nov 26 08:28:39 crc kubenswrapper[4909]: E1126 08:28:39.224291 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1\": container with ID starting with 70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1 not found: ID does not exist" containerID="70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.224334 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1"} err="failed to get container status \"70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1\": rpc error: code = NotFound desc = could not find container \"70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1\": container with ID starting with 70350064e1f1aecaf1091e561168c9d8b9c95302d305a7e9da8bcbc9a84bbfb1 not found: ID does not exist" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.224361 4909 scope.go:117] "RemoveContainer" containerID="5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80" Nov 26 08:28:39 crc kubenswrapper[4909]: E1126 08:28:39.224752 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80\": container with ID starting with 5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80 not found: ID does not exist" containerID="5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.224789 4909 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80"} err="failed to get container status \"5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80\": rpc error: code = NotFound desc = could not find container \"5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80\": container with ID starting with 5de4a1887e7bd0ace69d0ba1b05c772c89c0adea1a95bed10c0e464d7bb39e80 not found: ID does not exist" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.224855 4909 scope.go:117] "RemoveContainer" containerID="c57dbd05208876fa658d07e5d54204b3716dd91800e75f2fc2e55d3d61779d35" Nov 26 08:28:39 crc kubenswrapper[4909]: E1126 08:28:39.225144 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c57dbd05208876fa658d07e5d54204b3716dd91800e75f2fc2e55d3d61779d35\": container with ID starting with c57dbd05208876fa658d07e5d54204b3716dd91800e75f2fc2e55d3d61779d35 not found: ID does not exist" containerID="c57dbd05208876fa658d07e5d54204b3716dd91800e75f2fc2e55d3d61779d35" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.225171 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57dbd05208876fa658d07e5d54204b3716dd91800e75f2fc2e55d3d61779d35"} err="failed to get container status \"c57dbd05208876fa658d07e5d54204b3716dd91800e75f2fc2e55d3d61779d35\": rpc error: code = NotFound desc = could not find container \"c57dbd05208876fa658d07e5d54204b3716dd91800e75f2fc2e55d3d61779d35\": container with ID starting with c57dbd05208876fa658d07e5d54204b3716dd91800e75f2fc2e55d3d61779d35 not found: ID does not exist" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.488856 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d8fd-account-create-2rknr" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.590776 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd5ms\" (UniqueName: \"kubernetes.io/projected/27055241-fb7f-439f-a0ce-54243ce3d2eb-kube-api-access-nd5ms\") pod \"27055241-fb7f-439f-a0ce-54243ce3d2eb\" (UID: \"27055241-fb7f-439f-a0ce-54243ce3d2eb\") " Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.595996 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27055241-fb7f-439f-a0ce-54243ce3d2eb-kube-api-access-nd5ms" (OuterVolumeSpecName: "kube-api-access-nd5ms") pod "27055241-fb7f-439f-a0ce-54243ce3d2eb" (UID: "27055241-fb7f-439f-a0ce-54243ce3d2eb"). InnerVolumeSpecName "kube-api-access-nd5ms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:28:39 crc kubenswrapper[4909]: I1126 08:28:39.693744 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd5ms\" (UniqueName: \"kubernetes.io/projected/27055241-fb7f-439f-a0ce-54243ce3d2eb-kube-api-access-nd5ms\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:40 crc kubenswrapper[4909]: I1126 08:28:40.094738 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d8fd-account-create-2rknr" event={"ID":"27055241-fb7f-439f-a0ce-54243ce3d2eb","Type":"ContainerDied","Data":"4a52260e47b9be05f92406e7d3d27c4a6463f4845a44b1cffa8f6d4735270d70"} Nov 26 08:28:40 crc kubenswrapper[4909]: I1126 08:28:40.094788 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a52260e47b9be05f92406e7d3d27c4a6463f4845a44b1cffa8f6d4735270d70" Nov 26 08:28:40 crc kubenswrapper[4909]: I1126 08:28:40.094802 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d8fd-account-create-2rknr" Nov 26 08:28:40 crc kubenswrapper[4909]: I1126 08:28:40.518208 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" path="/var/lib/kubelet/pods/4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0/volumes" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.165662 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85c649d7bf-5njdx"] Nov 26 08:28:42 crc kubenswrapper[4909]: E1126 08:28:42.166063 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" containerName="registry-server" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.166075 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" containerName="registry-server" Nov 26 08:28:42 crc kubenswrapper[4909]: E1126 08:28:42.166098 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27055241-fb7f-439f-a0ce-54243ce3d2eb" containerName="mariadb-account-create" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.166104 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="27055241-fb7f-439f-a0ce-54243ce3d2eb" containerName="mariadb-account-create" Nov 26 08:28:42 crc kubenswrapper[4909]: E1126 08:28:42.166116 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" containerName="extract-content" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.166123 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" containerName="extract-content" Nov 26 08:28:42 crc kubenswrapper[4909]: E1126 08:28:42.166143 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" containerName="extract-utilities" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.166149 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" containerName="extract-utilities" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.166322 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="27055241-fb7f-439f-a0ce-54243ce3d2eb" containerName="mariadb-account-create" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.166332 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1f05f1-3ef8-418c-a3f2-be6e5e11fba0" containerName="registry-server" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 
08:28:42.167286 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.177680 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85c649d7bf-5njdx"] Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.228759 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jwjcz"] Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.246581 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhs2b\" (UniqueName: \"kubernetes.io/projected/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-kube-api-access-dhs2b\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.246699 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-dns-svc\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.246832 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-nb\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.246861 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-config\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.246913 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-sb\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.248376 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jwjcz"] Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.248605 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.253536 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9xwf7" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.254092 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.254363 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.348743 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-config-data\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.348829 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-combined-ca-bundle\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.348898 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-nb\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.348925 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-config\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.348959 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrl9x\" (UniqueName: \"kubernetes.io/projected/72338267-9d00-4d71-b0fa-f1e2e5c42397-kube-api-access-hrl9x\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.348982 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72338267-9d00-4d71-b0fa-f1e2e5c42397-logs\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.349030 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-sb\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.349063 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhs2b\" (UniqueName: 
\"kubernetes.io/projected/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-kube-api-access-dhs2b\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.349122 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-dns-svc\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.349163 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-scripts\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.350283 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-nb\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.350525 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-dns-svc\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.350664 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-sb\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.350774 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-config\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.371370 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhs2b\" (UniqueName: \"kubernetes.io/projected/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-kube-api-access-dhs2b\") pod \"dnsmasq-dns-85c649d7bf-5njdx\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.450673 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrl9x\" (UniqueName: \"kubernetes.io/projected/72338267-9d00-4d71-b0fa-f1e2e5c42397-kube-api-access-hrl9x\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.450969 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72338267-9d00-4d71-b0fa-f1e2e5c42397-logs\") pod \"placement-db-sync-jwjcz\" (UID: 
\"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.451144 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-scripts\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.451293 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-config-data\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.451386 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-combined-ca-bundle\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.451631 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72338267-9d00-4d71-b0fa-f1e2e5c42397-logs\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.454791 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-scripts\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.455240 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-config-data\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.455483 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-combined-ca-bundle\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.468000 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrl9x\" (UniqueName: \"kubernetes.io/projected/72338267-9d00-4d71-b0fa-f1e2e5c42397-kube-api-access-hrl9x\") pod \"placement-db-sync-jwjcz\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.498675 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.576245 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:42 crc kubenswrapper[4909]: I1126 08:28:42.818793 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85c649d7bf-5njdx"] Nov 26 08:28:43 crc kubenswrapper[4909]: I1126 08:28:43.082144 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jwjcz"] Nov 26 08:28:43 crc kubenswrapper[4909]: W1126 08:28:43.085093 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72338267_9d00_4d71_b0fa_f1e2e5c42397.slice/crio-027e4a217efaaabf67fb069bb576a2c3701c82b478ff977410b75d58568bd1f7 WatchSource:0}: Error finding container 027e4a217efaaabf67fb069bb576a2c3701c82b478ff977410b75d58568bd1f7: Status 404 returned error can't find the container with id 027e4a217efaaabf67fb069bb576a2c3701c82b478ff977410b75d58568bd1f7 Nov 26 08:28:43 crc kubenswrapper[4909]: I1126 08:28:43.131827 4909 generic.go:334] "Generic (PLEG): container finished" podID="fd94cb09-3acf-4e20-9d66-060b1a2b17d4" containerID="f690d461f3f183835aac50b2a950d27a748c6b262124aa9cd74f786162de8468" exitCode=0 Nov 26 08:28:43 crc kubenswrapper[4909]: I1126 08:28:43.131919 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" event={"ID":"fd94cb09-3acf-4e20-9d66-060b1a2b17d4","Type":"ContainerDied","Data":"f690d461f3f183835aac50b2a950d27a748c6b262124aa9cd74f786162de8468"} Nov 26 08:28:43 crc kubenswrapper[4909]: I1126 08:28:43.131964 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" event={"ID":"fd94cb09-3acf-4e20-9d66-060b1a2b17d4","Type":"ContainerStarted","Data":"0826497c84c2c5e877b1870593ba0ed29adf3abd761e4ebd1365b443d9857465"} Nov 26 08:28:43 crc kubenswrapper[4909]: I1126 08:28:43.134110 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jwjcz" event={"ID":"72338267-9d00-4d71-b0fa-f1e2e5c42397","Type":"ContainerStarted","Data":"027e4a217efaaabf67fb069bb576a2c3701c82b478ff977410b75d58568bd1f7"} Nov 26 08:28:44 crc kubenswrapper[4909]: I1126 08:28:44.144032 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" event={"ID":"fd94cb09-3acf-4e20-9d66-060b1a2b17d4","Type":"ContainerStarted","Data":"4274b0b75b1b87cf9aa5e6c486577bba9331ccfd3c40f7ecc98529a60ef0656c"} Nov 26 08:28:44 crc kubenswrapper[4909]: I1126 08:28:44.145748 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:28:44 crc kubenswrapper[4909]: I1126 08:28:44.147683 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jwjcz" event={"ID":"72338267-9d00-4d71-b0fa-f1e2e5c42397","Type":"ContainerStarted","Data":"32f44e64e05d4f0e24753c975c3bcff90168387f26b6a0adaa13cc38b63288a2"} Nov 26 08:28:44 crc kubenswrapper[4909]: I1126 08:28:44.165698 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" podStartSLOduration=2.165685164 podStartE2EDuration="2.165685164s" podCreationTimestamp="2025-11-26 08:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:28:44.164230724 +0000 UTC m=+5296.310441890" watchObservedRunningTime="2025-11-26 08:28:44.165685164 +0000 UTC m=+5296.311896330" Nov 26 08:28:44 crc kubenswrapper[4909]: I1126 
08:28:44.203562 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jwjcz" podStartSLOduration=2.203531847 podStartE2EDuration="2.203531847s" podCreationTimestamp="2025-11-26 08:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:28:44.194465699 +0000 UTC m=+5296.340676885" watchObservedRunningTime="2025-11-26 08:28:44.203531847 +0000 UTC m=+5296.349743043" Nov 26 08:28:45 crc kubenswrapper[4909]: I1126 08:28:45.165025 4909 generic.go:334] "Generic (PLEG): container finished" podID="72338267-9d00-4d71-b0fa-f1e2e5c42397" containerID="32f44e64e05d4f0e24753c975c3bcff90168387f26b6a0adaa13cc38b63288a2" exitCode=0 Nov 26 08:28:45 crc kubenswrapper[4909]: I1126 08:28:45.165168 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jwjcz" event={"ID":"72338267-9d00-4d71-b0fa-f1e2e5c42397","Type":"ContainerDied","Data":"32f44e64e05d4f0e24753c975c3bcff90168387f26b6a0adaa13cc38b63288a2"} Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.577439 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.672297 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-scripts\") pod \"72338267-9d00-4d71-b0fa-f1e2e5c42397\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.672358 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72338267-9d00-4d71-b0fa-f1e2e5c42397-logs\") pod \"72338267-9d00-4d71-b0fa-f1e2e5c42397\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.672425 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-combined-ca-bundle\") pod \"72338267-9d00-4d71-b0fa-f1e2e5c42397\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.672516 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-config-data\") pod \"72338267-9d00-4d71-b0fa-f1e2e5c42397\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.672549 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrl9x\" (UniqueName: \"kubernetes.io/projected/72338267-9d00-4d71-b0fa-f1e2e5c42397-kube-api-access-hrl9x\") pod \"72338267-9d00-4d71-b0fa-f1e2e5c42397\" (UID: \"72338267-9d00-4d71-b0fa-f1e2e5c42397\") " Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.672764 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72338267-9d00-4d71-b0fa-f1e2e5c42397-logs" (OuterVolumeSpecName: "logs") pod "72338267-9d00-4d71-b0fa-f1e2e5c42397" (UID: "72338267-9d00-4d71-b0fa-f1e2e5c42397"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.673315 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72338267-9d00-4d71-b0fa-f1e2e5c42397-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.677781 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72338267-9d00-4d71-b0fa-f1e2e5c42397-kube-api-access-hrl9x" (OuterVolumeSpecName: "kube-api-access-hrl9x") pod "72338267-9d00-4d71-b0fa-f1e2e5c42397" (UID: "72338267-9d00-4d71-b0fa-f1e2e5c42397"). InnerVolumeSpecName "kube-api-access-hrl9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.678121 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-scripts" (OuterVolumeSpecName: "scripts") pod "72338267-9d00-4d71-b0fa-f1e2e5c42397" (UID: "72338267-9d00-4d71-b0fa-f1e2e5c42397"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.707197 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72338267-9d00-4d71-b0fa-f1e2e5c42397" (UID: "72338267-9d00-4d71-b0fa-f1e2e5c42397"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.708121 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-config-data" (OuterVolumeSpecName: "config-data") pod "72338267-9d00-4d71-b0fa-f1e2e5c42397" (UID: "72338267-9d00-4d71-b0fa-f1e2e5c42397"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.775411 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.775447 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.775460 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72338267-9d00-4d71-b0fa-f1e2e5c42397-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:46 crc kubenswrapper[4909]: I1126 08:28:46.775470 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrl9x\" (UniqueName: \"kubernetes.io/projected/72338267-9d00-4d71-b0fa-f1e2e5c42397-kube-api-access-hrl9x\") on node \"crc\" DevicePath \"\"" Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.188422 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jwjcz" event={"ID":"72338267-9d00-4d71-b0fa-f1e2e5c42397","Type":"ContainerDied","Data":"027e4a217efaaabf67fb069bb576a2c3701c82b478ff977410b75d58568bd1f7"} Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.188486 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="027e4a217efaaabf67fb069bb576a2c3701c82b478ff977410b75d58568bd1f7" Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.188651 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jwjcz" Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.699661 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7f4957bb88-wqqml"] Nov 26 08:28:47 crc kubenswrapper[4909]: E1126 08:28:47.700516 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72338267-9d00-4d71-b0fa-f1e2e5c42397" containerName="placement-db-sync" Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.700534 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="72338267-9d00-4d71-b0fa-f1e2e5c42397" containerName="placement-db-sync" Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.700783 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="72338267-9d00-4d71-b0fa-f1e2e5c42397" containerName="placement-db-sync" Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.702006 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.705653 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.705800 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9xwf7"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.705977 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.714232 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f4957bb88-wqqml"]
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.895026 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-combined-ca-bundle\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.895100 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-config-data\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.895213 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2pq8\" (UniqueName: \"kubernetes.io/projected/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-kube-api-access-v2pq8\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.895288 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-logs\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.895380 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-scripts\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.996796 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-combined-ca-bundle\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.996863 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-config-data\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.996901 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2pq8\" (UniqueName: \"kubernetes.io/projected/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-kube-api-access-v2pq8\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.996931 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-logs\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.996964 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-scripts\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:47 crc kubenswrapper[4909]: I1126 08:28:47.997514 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-logs\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:48 crc kubenswrapper[4909]: I1126 08:28:48.004516 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-config-data\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:48 crc kubenswrapper[4909]: I1126 08:28:48.013913 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-scripts\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:48 crc kubenswrapper[4909]: I1126 08:28:48.014554 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-combined-ca-bundle\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:48 crc kubenswrapper[4909]: I1126 08:28:48.057499 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2pq8\" (UniqueName: \"kubernetes.io/projected/37b7c6b6-3229-4e8f-b403-8a57c3249e1e-kube-api-access-v2pq8\") pod \"placement-7f4957bb88-wqqml\" (UID: \"37b7c6b6-3229-4e8f-b403-8a57c3249e1e\") " pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:48 crc kubenswrapper[4909]: I1126 08:28:48.333864 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f4957bb88-wqqml"
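Note: the mount sequence above runs VerifyControllerAttachedVolume, then MountVolume, then reports "MountVolume.SetUp succeeded" per volume; the kube-api-access-v2pq8 volume is the projected service-account credential bundle, which Kubernetes exposes inside the container at a well-known path. A minimal in-pod sketch of consuming it follows (illustrative; assumes it runs inside a pod with the default projection).

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Once "MountVolume.SetUp succeeded" is logged for a kube-api-access-*
// volume, the container sees the projected service-account credentials
// at this standard location.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(saDir, name))
		if err != nil {
			fmt.Printf("%s: not available (%v) -- not running in a pod?\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(b))
	}
}
```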
Nov 26 08:28:48 crc kubenswrapper[4909]: I1126 08:28:48.855283 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f4957bb88-wqqml"]
Nov 26 08:28:49 crc kubenswrapper[4909]: I1126 08:28:49.210163 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f4957bb88-wqqml" event={"ID":"37b7c6b6-3229-4e8f-b403-8a57c3249e1e","Type":"ContainerStarted","Data":"4ed28f84b4a50c2b969b849e4a90786465186189104f82f2465049866bdd8a3a"}
Nov 26 08:28:49 crc kubenswrapper[4909]: I1126 08:28:49.210222 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f4957bb88-wqqml" event={"ID":"37b7c6b6-3229-4e8f-b403-8a57c3249e1e","Type":"ContainerStarted","Data":"e183af28469cba36db7ee31f74d57501bc7593beb1e86338d7a81a40f97b3a59"}
Nov 26 08:28:50 crc kubenswrapper[4909]: I1126 08:28:50.228380 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f4957bb88-wqqml" event={"ID":"37b7c6b6-3229-4e8f-b403-8a57c3249e1e","Type":"ContainerStarted","Data":"b590a2181e862f7821a027ea3f66307812b93e5982dcc1cd197a3bda130a3671"}
Nov 26 08:28:50 crc kubenswrapper[4909]: I1126 08:28:50.229408 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:50 crc kubenswrapper[4909]: I1126 08:28:50.229464 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:28:50 crc kubenswrapper[4909]: I1126 08:28:50.265572 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7f4957bb88-wqqml" podStartSLOduration=3.2655504779999998 podStartE2EDuration="3.265550478s" podCreationTimestamp="2025-11-26 08:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:28:50.254271801 +0000 UTC m=+5302.400483027" watchObservedRunningTime="2025-11-26 08:28:50.265550478 +0000 UTC m=+5302.411761654"
Nov 26 08:28:52 crc kubenswrapper[4909]: I1126 08:28:52.515441 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx"
Nov 26 08:28:52 crc kubenswrapper[4909]: I1126 08:28:52.609540 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8565f7649c-4pftv"]
Nov 26 08:28:52 crc kubenswrapper[4909]: I1126 08:28:52.609892 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8565f7649c-4pftv" podUID="19e3bd57-e081-4df9-adeb-06954615dd51" containerName="dnsmasq-dns" containerID="cri-o://b35da79673a7fead54a139acec62b4087487cacb593381a97ea1078371731d73" gracePeriod=10
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.271979 4909 generic.go:334] "Generic (PLEG): container finished" podID="19e3bd57-e081-4df9-adeb-06954615dd51" containerID="b35da79673a7fead54a139acec62b4087487cacb593381a97ea1078371731d73" exitCode=0
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.272198 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8565f7649c-4pftv" event={"ID":"19e3bd57-e081-4df9-adeb-06954615dd51","Type":"ContainerDied","Data":"b35da79673a7fead54a139acec62b4087487cacb593381a97ea1078371731d73"}
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.387850 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8565f7649c-4pftv"
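Note: "Killing container with a grace period ... gracePeriod=10" is the standard termination flow: the runtime delivers SIGTERM (or the image's STOPSIGNAL), waits up to the grace period, then escalates to SIGKILL; here dnsmasq-dns exited cleanly (exitCode=0) before the 10s elapsed. A stdlib-Go stand-in for that escalation follows; it drives a local process, not a container, and is not CRI-O's actual code.

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace sends SIGTERM, allows the process gracePeriod to exit on
// its own, and only then sends SIGKILL -- mirroring the log's
// "Killing container with a grace period" / "container finished exitCode=0".
func killWithGrace(cmd *exec.Cmd, gracePeriod time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	cmd.Process.Signal(syscall.SIGTERM)
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(gracePeriod):
		cmd.Process.Kill() // SIGKILL, no further warning
		fmt.Println("grace period expired; killed:", <-done)
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// dnsmasq-dns above got gracePeriod=10; 2s is enough for the demo,
	// since sleep's default SIGTERM disposition is to terminate.
	killWithGrace(cmd, 2*time.Second)
}
```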
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.497082 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-dns-svc\") pod \"19e3bd57-e081-4df9-adeb-06954615dd51\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") "
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.497136 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-nb\") pod \"19e3bd57-e081-4df9-adeb-06954615dd51\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") "
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.497185 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-sb\") pod \"19e3bd57-e081-4df9-adeb-06954615dd51\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") "
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.497213 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-config\") pod \"19e3bd57-e081-4df9-adeb-06954615dd51\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") "
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.497243 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cbvf\" (UniqueName: \"kubernetes.io/projected/19e3bd57-e081-4df9-adeb-06954615dd51-kube-api-access-2cbvf\") pod \"19e3bd57-e081-4df9-adeb-06954615dd51\" (UID: \"19e3bd57-e081-4df9-adeb-06954615dd51\") "
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.509878 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e3bd57-e081-4df9-adeb-06954615dd51-kube-api-access-2cbvf" (OuterVolumeSpecName: "kube-api-access-2cbvf") pod "19e3bd57-e081-4df9-adeb-06954615dd51" (UID: "19e3bd57-e081-4df9-adeb-06954615dd51"). InnerVolumeSpecName "kube-api-access-2cbvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.548744 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "19e3bd57-e081-4df9-adeb-06954615dd51" (UID: "19e3bd57-e081-4df9-adeb-06954615dd51"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.552347 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "19e3bd57-e081-4df9-adeb-06954615dd51" (UID: "19e3bd57-e081-4df9-adeb-06954615dd51"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.555290 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "19e3bd57-e081-4df9-adeb-06954615dd51" (UID: "19e3bd57-e081-4df9-adeb-06954615dd51"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.555339 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-config" (OuterVolumeSpecName: "config") pod "19e3bd57-e081-4df9-adeb-06954615dd51" (UID: "19e3bd57-e081-4df9-adeb-06954615dd51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.599050 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.599084 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.599112 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.599124 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e3bd57-e081-4df9-adeb-06954615dd51-config\") on node \"crc\" DevicePath \"\""
Nov 26 08:28:53 crc kubenswrapper[4909]: I1126 08:28:53.599134 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cbvf\" (UniqueName: \"kubernetes.io/projected/19e3bd57-e081-4df9-adeb-06954615dd51-kube-api-access-2cbvf\") on node \"crc\" DevicePath \"\""
Nov 26 08:28:54 crc kubenswrapper[4909]: I1126 08:28:54.286698 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8565f7649c-4pftv" event={"ID":"19e3bd57-e081-4df9-adeb-06954615dd51","Type":"ContainerDied","Data":"eef39ea9d5ac22b0449fe3bc76a0956938ba3a90795736d5f4a466197e67d94d"}
Nov 26 08:28:54 crc kubenswrapper[4909]: I1126 08:28:54.286762 4909 scope.go:117] "RemoveContainer" containerID="b35da79673a7fead54a139acec62b4087487cacb593381a97ea1078371731d73"
Nov 26 08:28:54 crc kubenswrapper[4909]: I1126 08:28:54.286793 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8565f7649c-4pftv"
Nov 26 08:28:54 crc kubenswrapper[4909]: I1126 08:28:54.313453 4909 scope.go:117] "RemoveContainer" containerID="40523a2f379875e2b1b8986a73c606cbd75f1cb72a6f7d2a8b102fc83d2f5186"
Nov 26 08:28:54 crc kubenswrapper[4909]: I1126 08:28:54.323741 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8565f7649c-4pftv"]
Nov 26 08:28:54 crc kubenswrapper[4909]: I1126 08:28:54.329946 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8565f7649c-4pftv"]
Nov 26 08:28:54 crc kubenswrapper[4909]: I1126 08:28:54.520582 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e3bd57-e081-4df9-adeb-06954615dd51" path="/var/lib/kubelet/pods/19e3bd57-e081-4df9-adeb-06954615dd51/volumes"
Nov 26 08:29:19 crc kubenswrapper[4909]: I1126 08:29:19.279670 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:29:19 crc kubenswrapper[4909]: I1126 08:29:19.290570 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f4957bb88-wqqml"
Nov 26 08:29:38 crc kubenswrapper[4909]: I1126 08:29:38.083332 4909 scope.go:117] "RemoveContainer" containerID="9da984132766679b49bddcee6458260bc6ab8ee6a34c04a3987d851b3318508b"
Nov 26 08:29:38 crc kubenswrapper[4909]: I1126 08:29:38.114909 4909 scope.go:117] "RemoveContainer" containerID="3476f4a69722e91a35c14ce572ef194893b7d5c736784fbf15569b0235266687"
Nov 26 08:29:38 crc kubenswrapper[4909]: I1126 08:29:38.183074 4909 scope.go:117] "RemoveContainer" containerID="148390318a352fbed1919f42225d443d21316df99cd0770883277ad45ff57d53"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.431756 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wtrjc"]
Nov 26 08:29:40 crc kubenswrapper[4909]: E1126 08:29:40.432468 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e3bd57-e081-4df9-adeb-06954615dd51" containerName="dnsmasq-dns"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.432486 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e3bd57-e081-4df9-adeb-06954615dd51" containerName="dnsmasq-dns"
Nov 26 08:29:40 crc kubenswrapper[4909]: E1126 08:29:40.432519 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e3bd57-e081-4df9-adeb-06954615dd51" containerName="init"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.432528 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e3bd57-e081-4df9-adeb-06954615dd51" containerName="init"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.432785 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e3bd57-e081-4df9-adeb-06954615dd51" containerName="dnsmasq-dns"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.433740 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wtrjc"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.444266 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wtrjc"]
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.511627 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-68n7m"]
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.514714 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-68n7m"
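Note: "Cleaned up orphaned pod volumes dir" is the kubelet's housekeeping for pods that no longer exist: once every volume under /var/lib/kubelet/pods/<podUID>/volumes has been torn down, the now-empty directory is removed. The following is a loose stdlib-Go sketch of that check; the function name and the simplistic "empty means unmounted" test are illustrative only, not the kubelet's actual logic.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs removes the volumes dir of any pod directory
// whose pod is no longer active and whose volumes have all been
// unmounted (here approximated as "the dir is empty").
func cleanupOrphanedPodDirs(kubeletRoot string, active map[string]bool) error {
	podsDir := filepath.Join(kubeletRoot, "pods")
	entries, err := os.ReadDir(podsDir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		podUID := e.Name()
		if active[podUID] {
			continue // pod still exists; leave its dirs alone
		}
		volumes := filepath.Join(podsDir, podUID, "volumes")
		remaining, err := os.ReadDir(volumes)
		if err != nil || len(remaining) > 0 {
			continue // something is still mounted (or already gone)
		}
		if err := os.Remove(volumes); err == nil {
			fmt.Printf("Cleaned up orphaned pod volumes dir path=%q\n", volumes)
		}
	}
	return nil
}

func main() {
	// Demo against a throwaway directory rather than /var/lib/kubelet.
	root, _ := os.MkdirTemp("", "kubelet")
	os.MkdirAll(filepath.Join(root, "pods", "19e3bd57", "volumes"), 0o755)
	cleanupOrphanedPodDirs(root, map[string]bool{})
}
```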
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.522551 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-68n7m"]
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.544998 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhkms\" (UniqueName: \"kubernetes.io/projected/707619c0-8313-407a-965a-d1d4f6de44d1-kube-api-access-mhkms\") pod \"nova-api-db-create-wtrjc\" (UID: \"707619c0-8313-407a-965a-d1d4f6de44d1\") " pod="openstack/nova-api-db-create-wtrjc"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.545034 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmv6\" (UniqueName: \"kubernetes.io/projected/775ccc85-f26e-477f-a010-3ec3418ebadf-kube-api-access-mpmv6\") pod \"nova-cell0-db-create-68n7m\" (UID: \"775ccc85-f26e-477f-a010-3ec3418ebadf\") " pod="openstack/nova-cell0-db-create-68n7m"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.621536 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-cs2kv"]
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.623130 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-cs2kv"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.629251 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-cs2kv"]
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.646788 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm9g7\" (UniqueName: \"kubernetes.io/projected/2b5af06f-0585-46f5-8e74-1ad13113c497-kube-api-access-vm9g7\") pod \"nova-cell1-db-create-cs2kv\" (UID: \"2b5af06f-0585-46f5-8e74-1ad13113c497\") " pod="openstack/nova-cell1-db-create-cs2kv"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.646876 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhkms\" (UniqueName: \"kubernetes.io/projected/707619c0-8313-407a-965a-d1d4f6de44d1-kube-api-access-mhkms\") pod \"nova-api-db-create-wtrjc\" (UID: \"707619c0-8313-407a-965a-d1d4f6de44d1\") " pod="openstack/nova-api-db-create-wtrjc"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.646914 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpmv6\" (UniqueName: \"kubernetes.io/projected/775ccc85-f26e-477f-a010-3ec3418ebadf-kube-api-access-mpmv6\") pod \"nova-cell0-db-create-68n7m\" (UID: \"775ccc85-f26e-477f-a010-3ec3418ebadf\") " pod="openstack/nova-cell0-db-create-68n7m"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.668138 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhkms\" (UniqueName: \"kubernetes.io/projected/707619c0-8313-407a-965a-d1d4f6de44d1-kube-api-access-mhkms\") pod \"nova-api-db-create-wtrjc\" (UID: \"707619c0-8313-407a-965a-d1d4f6de44d1\") " pod="openstack/nova-api-db-create-wtrjc"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.675755 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpmv6\" (UniqueName: \"kubernetes.io/projected/775ccc85-f26e-477f-a010-3ec3418ebadf-kube-api-access-mpmv6\") pod \"nova-cell0-db-create-68n7m\" (UID: \"775ccc85-f26e-477f-a010-3ec3418ebadf\") " pod="openstack/nova-cell0-db-create-68n7m"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.748642 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm9g7\" (UniqueName: \"kubernetes.io/projected/2b5af06f-0585-46f5-8e74-1ad13113c497-kube-api-access-vm9g7\") pod \"nova-cell1-db-create-cs2kv\" (UID: \"2b5af06f-0585-46f5-8e74-1ad13113c497\") " pod="openstack/nova-cell1-db-create-cs2kv"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.754567 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wtrjc"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.766114 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm9g7\" (UniqueName: \"kubernetes.io/projected/2b5af06f-0585-46f5-8e74-1ad13113c497-kube-api-access-vm9g7\") pod \"nova-cell1-db-create-cs2kv\" (UID: \"2b5af06f-0585-46f5-8e74-1ad13113c497\") " pod="openstack/nova-cell1-db-create-cs2kv"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.837356 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-68n7m"
Nov 26 08:29:40 crc kubenswrapper[4909]: I1126 08:29:40.955387 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-cs2kv"
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.273810 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-68n7m"]
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.294037 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wtrjc"]
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.533166 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-cs2kv"]
Nov 26 08:29:41 crc kubenswrapper[4909]: W1126 08:29:41.642282 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5af06f_0585_46f5_8e74_1ad13113c497.slice/crio-dddb4ef3fe03022aafdc8aab734f021ff53273e4240224474c9c7b59fa686278 WatchSource:0}: Error finding container dddb4ef3fe03022aafdc8aab734f021ff53273e4240224474c9c7b59fa686278: Status 404 returned error can't find the container with id dddb4ef3fe03022aafdc8aab734f021ff53273e4240224474c9c7b59fa686278
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.846095 4909 generic.go:334] "Generic (PLEG): container finished" podID="775ccc85-f26e-477f-a010-3ec3418ebadf" containerID="573e12a59840bb398166b8ba0be27f790275cbe6794e3294286240c67ecdbb91" exitCode=0
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.846170 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-68n7m" event={"ID":"775ccc85-f26e-477f-a010-3ec3418ebadf","Type":"ContainerDied","Data":"573e12a59840bb398166b8ba0be27f790275cbe6794e3294286240c67ecdbb91"}
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.846200 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-68n7m" event={"ID":"775ccc85-f26e-477f-a010-3ec3418ebadf","Type":"ContainerStarted","Data":"96806c06891e53acc6b0fe3285d3302bc0e4932412b18a61f6f7a4bf6b06e1dc"}
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.847647 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-cs2kv" event={"ID":"2b5af06f-0585-46f5-8e74-1ad13113c497","Type":"ContainerStarted","Data":"ea5bf925b062fd3b060411635121d5a4cd113564e0e57d9fcb4ac39c569d9d41"}
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.847692 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-cs2kv" event={"ID":"2b5af06f-0585-46f5-8e74-1ad13113c497","Type":"ContainerStarted","Data":"dddb4ef3fe03022aafdc8aab734f021ff53273e4240224474c9c7b59fa686278"}
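Note: the W-level manager.go:1169 entry looks alarming but appears to be the usual benign startup race: the new crio-... cgroup becomes visible before the runtime has finished registering the container, so the lookup returns 404 and that one watch event is dropped; the very next PLEG lines show the same container ID (dddb4ef3...) running normally. A generic retry-with-backoff sketch of absorbing such transient not-found races follows (illustrative; not cadvisor's code).

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("can't find the container with id")

// handleWatchEvent retries a lookup that can transiently 404 while the
// runtime is still registering the container, instead of failing the
// event outright on the first miss.
func handleWatchEvent(lookup func() error, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = lookup(); err == nil || !errors.Is(err, errNotFound) {
			return err
		}
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between retries
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	// Simulate a container that becomes visible on the third lookup.
	calls := 0
	err := handleWatchEvent(func() error {
		calls++
		if calls < 3 {
			return errNotFound
		}
		return nil
	}, 5, 10*time.Millisecond)
	fmt.Println("calls:", calls, "err:", err)
}
```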
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.849543 4909 generic.go:334] "Generic (PLEG): container finished" podID="707619c0-8313-407a-965a-d1d4f6de44d1" containerID="1bf873418f811ad54388b74e43a76a9c80177d6c80cbea33ee392a459c962262" exitCode=0
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.849575 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wtrjc" event={"ID":"707619c0-8313-407a-965a-d1d4f6de44d1","Type":"ContainerDied","Data":"1bf873418f811ad54388b74e43a76a9c80177d6c80cbea33ee392a459c962262"}
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.849656 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wtrjc" event={"ID":"707619c0-8313-407a-965a-d1d4f6de44d1","Type":"ContainerStarted","Data":"d041bb69ae545754288f17b2b3cb87224878980b0a9ba655f2a6836e2049e169"}
Nov 26 08:29:41 crc kubenswrapper[4909]: I1126 08:29:41.881120 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-cs2kv" podStartSLOduration=1.881104805 podStartE2EDuration="1.881104805s" podCreationTimestamp="2025-11-26 08:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:29:41.877339202 +0000 UTC m=+5354.023550368" watchObservedRunningTime="2025-11-26 08:29:41.881104805 +0000 UTC m=+5354.027315971"
Nov 26 08:29:42 crc kubenswrapper[4909]: I1126 08:29:42.865488 4909 generic.go:334] "Generic (PLEG): container finished" podID="2b5af06f-0585-46f5-8e74-1ad13113c497" containerID="ea5bf925b062fd3b060411635121d5a4cd113564e0e57d9fcb4ac39c569d9d41" exitCode=0
Nov 26 08:29:42 crc kubenswrapper[4909]: I1126 08:29:42.865609 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-cs2kv" event={"ID":"2b5af06f-0585-46f5-8e74-1ad13113c497","Type":"ContainerDied","Data":"ea5bf925b062fd3b060411635121d5a4cd113564e0e57d9fcb4ac39c569d9d41"}
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.308973 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wtrjc"
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.313033 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-68n7m"
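Note: the pod_startup_latency_tracker arithmetic checks out: podStartSLOduration=1.881104805 is exactly watchObservedRunningTime (08:29:41.881104805) minus podCreationTimestamp (08:29:40), and the zero-valued firstStartedPulling/lastFinishedPulling mean no image pull was observed, so SLO and E2E durations agree (the m=+5354... suffix is Go's monotonic clock reading). A small Go reproduction of that computation, using the values from this log entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's time.Parse accepts a fractional seconds field even when the
	// layout omits it, so one layout covers both timestamps.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2025-11-26 08:29:40 +0000 UTC")
	watchObserved, _ := time.Parse(layout, "2025-11-26 08:29:41.881104805 +0000 UTC")

	// podStartSLOduration = watchObservedRunningTime - podCreationTimestamp
	slo := watchObserved.Sub(created)
	fmt.Printf("podStartSLOduration=%.9fs\n", slo.Seconds()) // 1.881104805s
}
```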
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.416340 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhkms\" (UniqueName: \"kubernetes.io/projected/707619c0-8313-407a-965a-d1d4f6de44d1-kube-api-access-mhkms\") pod \"707619c0-8313-407a-965a-d1d4f6de44d1\" (UID: \"707619c0-8313-407a-965a-d1d4f6de44d1\") "
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.416868 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpmv6\" (UniqueName: \"kubernetes.io/projected/775ccc85-f26e-477f-a010-3ec3418ebadf-kube-api-access-mpmv6\") pod \"775ccc85-f26e-477f-a010-3ec3418ebadf\" (UID: \"775ccc85-f26e-477f-a010-3ec3418ebadf\") "
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.422240 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775ccc85-f26e-477f-a010-3ec3418ebadf-kube-api-access-mpmv6" (OuterVolumeSpecName: "kube-api-access-mpmv6") pod "775ccc85-f26e-477f-a010-3ec3418ebadf" (UID: "775ccc85-f26e-477f-a010-3ec3418ebadf"). InnerVolumeSpecName "kube-api-access-mpmv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.422441 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/707619c0-8313-407a-965a-d1d4f6de44d1-kube-api-access-mhkms" (OuterVolumeSpecName: "kube-api-access-mhkms") pod "707619c0-8313-407a-965a-d1d4f6de44d1" (UID: "707619c0-8313-407a-965a-d1d4f6de44d1"). InnerVolumeSpecName "kube-api-access-mhkms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.518832 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpmv6\" (UniqueName: \"kubernetes.io/projected/775ccc85-f26e-477f-a010-3ec3418ebadf-kube-api-access-mpmv6\") on node \"crc\" DevicePath \"\""
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.518869 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhkms\" (UniqueName: \"kubernetes.io/projected/707619c0-8313-407a-965a-d1d4f6de44d1-kube-api-access-mhkms\") on node \"crc\" DevicePath \"\""
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.876037 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-68n7m" event={"ID":"775ccc85-f26e-477f-a010-3ec3418ebadf","Type":"ContainerDied","Data":"96806c06891e53acc6b0fe3285d3302bc0e4932412b18a61f6f7a4bf6b06e1dc"}
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.876084 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96806c06891e53acc6b0fe3285d3302bc0e4932412b18a61f6f7a4bf6b06e1dc"
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.877049 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-68n7m"
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.878279 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wtrjc"
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.878294 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wtrjc" event={"ID":"707619c0-8313-407a-965a-d1d4f6de44d1","Type":"ContainerDied","Data":"d041bb69ae545754288f17b2b3cb87224878980b0a9ba655f2a6836e2049e169"}
Nov 26 08:29:43 crc kubenswrapper[4909]: I1126 08:29:43.878342 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d041bb69ae545754288f17b2b3cb87224878980b0a9ba655f2a6836e2049e169"
Nov 26 08:29:44 crc kubenswrapper[4909]: I1126 08:29:44.174748 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-cs2kv"
Nov 26 08:29:44 crc kubenswrapper[4909]: I1126 08:29:44.332885 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm9g7\" (UniqueName: \"kubernetes.io/projected/2b5af06f-0585-46f5-8e74-1ad13113c497-kube-api-access-vm9g7\") pod \"2b5af06f-0585-46f5-8e74-1ad13113c497\" (UID: \"2b5af06f-0585-46f5-8e74-1ad13113c497\") "
Nov 26 08:29:44 crc kubenswrapper[4909]: I1126 08:29:44.340662 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5af06f-0585-46f5-8e74-1ad13113c497-kube-api-access-vm9g7" (OuterVolumeSpecName: "kube-api-access-vm9g7") pod "2b5af06f-0585-46f5-8e74-1ad13113c497" (UID: "2b5af06f-0585-46f5-8e74-1ad13113c497"). InnerVolumeSpecName "kube-api-access-vm9g7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:29:44 crc kubenswrapper[4909]: I1126 08:29:44.434892 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm9g7\" (UniqueName: \"kubernetes.io/projected/2b5af06f-0585-46f5-8e74-1ad13113c497-kube-api-access-vm9g7\") on node \"crc\" DevicePath \"\""
Nov 26 08:29:44 crc kubenswrapper[4909]: I1126 08:29:44.891989 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-cs2kv" event={"ID":"2b5af06f-0585-46f5-8e74-1ad13113c497","Type":"ContainerDied","Data":"dddb4ef3fe03022aafdc8aab734f021ff53273e4240224474c9c7b59fa686278"}
Nov 26 08:29:44 crc kubenswrapper[4909]: I1126 08:29:44.892047 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dddb4ef3fe03022aafdc8aab734f021ff53273e4240224474c9c7b59fa686278"
Nov 26 08:29:44 crc kubenswrapper[4909]: I1126 08:29:44.892168 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-cs2kv"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.736044 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7761-account-create-76j7n"]
Nov 26 08:29:50 crc kubenswrapper[4909]: E1126 08:29:50.737080 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707619c0-8313-407a-965a-d1d4f6de44d1" containerName="mariadb-database-create"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.737097 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="707619c0-8313-407a-965a-d1d4f6de44d1" containerName="mariadb-database-create"
Nov 26 08:29:50 crc kubenswrapper[4909]: E1126 08:29:50.737125 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775ccc85-f26e-477f-a010-3ec3418ebadf" containerName="mariadb-database-create"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.737134 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="775ccc85-f26e-477f-a010-3ec3418ebadf" containerName="mariadb-database-create"
Nov 26 08:29:50 crc kubenswrapper[4909]: E1126 08:29:50.737147 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5af06f-0585-46f5-8e74-1ad13113c497" containerName="mariadb-database-create"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.737156 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5af06f-0585-46f5-8e74-1ad13113c497" containerName="mariadb-database-create"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.737405 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="707619c0-8313-407a-965a-d1d4f6de44d1" containerName="mariadb-database-create"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.737435 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5af06f-0585-46f5-8e74-1ad13113c497" containerName="mariadb-database-create"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.737453 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="775ccc85-f26e-477f-a010-3ec3418ebadf" containerName="mariadb-database-create"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.738339 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7761-account-create-76j7n"
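Note: the E-level cpu_manager/memory_manager "RemoveStaleState" lines recur throughout this log and, despite the error level, are routine: when a new pod is admitted, the resource managers prune bookkeeping for containers that no longer exist (here the three finished db-create jobs). A minimal illustrative sketch of that pruning follows; the types and cpuset strings are assumptions, not the kubelet's actual state format.

```go
package main

import "fmt"

type containerRef struct{ podUID, container string }

// removeStaleState drops resource-manager assignments for containers
// that are no longer active -- what the "RemoveStaleState" and
// "Deleted CPUSet assignment" pairs above record.
func removeStaleState(assignments map[containerRef]string, active map[containerRef]bool) {
	for ref := range assignments {
		if active[ref] {
			continue
		}
		fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
			ref.podUID, ref.container)
		delete(assignments, ref)
		fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n",
			ref.podUID, ref.container)
	}
}

func main() {
	assignments := map[containerRef]string{
		{"707619c0-8313-407a-965a-d1d4f6de44d1", "mariadb-database-create"}: "0-3",
		{"775ccc85-f26e-477f-a010-3ec3418ebadf", "mariadb-database-create"}: "0-3",
		{"2b5af06f-0585-46f5-8e74-1ad13113c497", "mariadb-database-create"}: "0-3",
	}
	// The db-create pods finished and were deleted, so none are active.
	removeStaleState(assignments, map[containerRef]bool{})
}
```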
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.741928 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.749464 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7761-account-create-76j7n"]
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.787424 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8g7z\" (UniqueName: \"kubernetes.io/projected/99aa8a46-1169-42cc-8693-cd3e0d1f46a5-kube-api-access-d8g7z\") pod \"nova-api-7761-account-create-76j7n\" (UID: \"99aa8a46-1169-42cc-8693-cd3e0d1f46a5\") " pod="openstack/nova-api-7761-account-create-76j7n"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.888679 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8g7z\" (UniqueName: \"kubernetes.io/projected/99aa8a46-1169-42cc-8693-cd3e0d1f46a5-kube-api-access-d8g7z\") pod \"nova-api-7761-account-create-76j7n\" (UID: \"99aa8a46-1169-42cc-8693-cd3e0d1f46a5\") " pod="openstack/nova-api-7761-account-create-76j7n"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.923015 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4109-account-create-tt7nv"]
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.926254 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4109-account-create-tt7nv"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.930571 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.937760 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4109-account-create-tt7nv"]
Nov 26 08:29:50 crc kubenswrapper[4909]: I1126 08:29:50.938508 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8g7z\" (UniqueName: \"kubernetes.io/projected/99aa8a46-1169-42cc-8693-cd3e0d1f46a5-kube-api-access-d8g7z\") pod \"nova-api-7761-account-create-76j7n\" (UID: \"99aa8a46-1169-42cc-8693-cd3e0d1f46a5\") " pod="openstack/nova-api-7761-account-create-76j7n"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.015472 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-30e2-account-create-c5rnd"]
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.016864 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-30e2-account-create-c5rnd"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.020394 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.028703 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-30e2-account-create-c5rnd"]
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.073370 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7761-account-create-76j7n"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.092562 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvggc\" (UniqueName: \"kubernetes.io/projected/3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8-kube-api-access-xvggc\") pod \"nova-cell0-4109-account-create-tt7nv\" (UID: \"3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8\") " pod="openstack/nova-cell0-4109-account-create-tt7nv"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.092692 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj98b\" (UniqueName: \"kubernetes.io/projected/032f0f9c-e208-4d1f-a169-4d39679324bb-kube-api-access-kj98b\") pod \"nova-cell1-30e2-account-create-c5rnd\" (UID: \"032f0f9c-e208-4d1f-a169-4d39679324bb\") " pod="openstack/nova-cell1-30e2-account-create-c5rnd"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.194163 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvggc\" (UniqueName: \"kubernetes.io/projected/3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8-kube-api-access-xvggc\") pod \"nova-cell0-4109-account-create-tt7nv\" (UID: \"3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8\") " pod="openstack/nova-cell0-4109-account-create-tt7nv"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.194519 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj98b\" (UniqueName: \"kubernetes.io/projected/032f0f9c-e208-4d1f-a169-4d39679324bb-kube-api-access-kj98b\") pod \"nova-cell1-30e2-account-create-c5rnd\" (UID: \"032f0f9c-e208-4d1f-a169-4d39679324bb\") " pod="openstack/nova-cell1-30e2-account-create-c5rnd"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.215306 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvggc\" (UniqueName: \"kubernetes.io/projected/3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8-kube-api-access-xvggc\") pod \"nova-cell0-4109-account-create-tt7nv\" (UID: \"3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8\") " pod="openstack/nova-cell0-4109-account-create-tt7nv"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.216078 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj98b\" (UniqueName: \"kubernetes.io/projected/032f0f9c-e208-4d1f-a169-4d39679324bb-kube-api-access-kj98b\") pod \"nova-cell1-30e2-account-create-c5rnd\" (UID: \"032f0f9c-e208-4d1f-a169-4d39679324bb\") " pod="openstack/nova-cell1-30e2-account-create-c5rnd"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.330080 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4109-account-create-tt7nv"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.336830 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-30e2-account-create-c5rnd"
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.378817 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7761-account-create-76j7n"]
Nov 26 08:29:51 crc kubenswrapper[4909]: W1126 08:29:51.389389 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99aa8a46_1169_42cc_8693_cd3e0d1f46a5.slice/crio-ea0343611d581ff41bdd4c457e9f355473b1f763eda4a89f46eefec62f082a22 WatchSource:0}: Error finding container ea0343611d581ff41bdd4c457e9f355473b1f763eda4a89f46eefec62f082a22: Status 404 returned error can't find the container with id ea0343611d581ff41bdd4c457e9f355473b1f763eda4a89f46eefec62f082a22
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.778688 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4109-account-create-tt7nv"]
Nov 26 08:29:51 crc kubenswrapper[4909]: W1126 08:29:51.779096 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dda3a1e_74f3_4183_b9cc_d29fac5fa0d8.slice/crio-76f588bbf719b0f0b29530d429b57fde4a6dd6b31d3b67d6322e0abf9b543a97 WatchSource:0}: Error finding container 76f588bbf719b0f0b29530d429b57fde4a6dd6b31d3b67d6322e0abf9b543a97: Status 404 returned error can't find the container with id 76f588bbf719b0f0b29530d429b57fde4a6dd6b31d3b67d6322e0abf9b543a97
Nov 26 08:29:51 crc kubenswrapper[4909]: W1126 08:29:51.828972 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod032f0f9c_e208_4d1f_a169_4d39679324bb.slice/crio-0a60476dc0a107a6c00c21992f5279bebd6018603c0cc843980e1a3a55a1d931 WatchSource:0}: Error finding container 0a60476dc0a107a6c00c21992f5279bebd6018603c0cc843980e1a3a55a1d931: Status 404 returned error can't find the container with id 0a60476dc0a107a6c00c21992f5279bebd6018603c0cc843980e1a3a55a1d931
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.831911 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-30e2-account-create-c5rnd"]
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.971983 4909 generic.go:334] "Generic (PLEG): container finished" podID="99aa8a46-1169-42cc-8693-cd3e0d1f46a5" containerID="44b99459fe7c4cb0e7f06b4f118f75d72ae6c7712c7e6d7799b61efbdd561e6e" exitCode=0
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.972171 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7761-account-create-76j7n" event={"ID":"99aa8a46-1169-42cc-8693-cd3e0d1f46a5","Type":"ContainerDied","Data":"44b99459fe7c4cb0e7f06b4f118f75d72ae6c7712c7e6d7799b61efbdd561e6e"}
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.972495 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7761-account-create-76j7n" event={"ID":"99aa8a46-1169-42cc-8693-cd3e0d1f46a5","Type":"ContainerStarted","Data":"ea0343611d581ff41bdd4c457e9f355473b1f763eda4a89f46eefec62f082a22"}
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.981501 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-30e2-account-create-c5rnd" event={"ID":"032f0f9c-e208-4d1f-a169-4d39679324bb","Type":"ContainerStarted","Data":"0a60476dc0a107a6c00c21992f5279bebd6018603c0cc843980e1a3a55a1d931"}
Nov 26 08:29:51 crc kubenswrapper[4909]: I1126 08:29:51.990907 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4109-account-create-tt7nv" event={"ID":"3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8","Type":"ContainerStarted","Data":"76f588bbf719b0f0b29530d429b57fde4a6dd6b31d3b67d6322e0abf9b543a97"}
pod="openstack/nova-cell0-4109-account-create-tt7nv" event={"ID":"3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8","Type":"ContainerStarted","Data":"76f588bbf719b0f0b29530d429b57fde4a6dd6b31d3b67d6322e0abf9b543a97"} Nov 26 08:29:52 crc kubenswrapper[4909]: I1126 08:29:52.014669 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-4109-account-create-tt7nv" podStartSLOduration=2.014647268 podStartE2EDuration="2.014647268s" podCreationTimestamp="2025-11-26 08:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:29:52.006206168 +0000 UTC m=+5364.152417334" watchObservedRunningTime="2025-11-26 08:29:52.014647268 +0000 UTC m=+5364.160858424" Nov 26 08:29:53 crc kubenswrapper[4909]: I1126 08:29:53.004660 4909 generic.go:334] "Generic (PLEG): container finished" podID="032f0f9c-e208-4d1f-a169-4d39679324bb" containerID="14bbe502d690d1baabfab18762ebc51a4dfcb544907664583813379c6cead027" exitCode=0 Nov 26 08:29:53 crc kubenswrapper[4909]: I1126 08:29:53.004781 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-30e2-account-create-c5rnd" event={"ID":"032f0f9c-e208-4d1f-a169-4d39679324bb","Type":"ContainerDied","Data":"14bbe502d690d1baabfab18762ebc51a4dfcb544907664583813379c6cead027"} Nov 26 08:29:53 crc kubenswrapper[4909]: I1126 08:29:53.021250 4909 generic.go:334] "Generic (PLEG): container finished" podID="3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8" containerID="2d3a09695f1e8fae8ea2d1bf9d39deba37613d72862c06b637a53e315dce8a34" exitCode=0 Nov 26 08:29:53 crc kubenswrapper[4909]: I1126 08:29:53.021538 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4109-account-create-tt7nv" event={"ID":"3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8","Type":"ContainerDied","Data":"2d3a09695f1e8fae8ea2d1bf9d39deba37613d72862c06b637a53e315dce8a34"} Nov 26 08:29:53 crc kubenswrapper[4909]: I1126 08:29:53.368045 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7761-account-create-76j7n" Nov 26 08:29:53 crc kubenswrapper[4909]: I1126 08:29:53.561332 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8g7z\" (UniqueName: \"kubernetes.io/projected/99aa8a46-1169-42cc-8693-cd3e0d1f46a5-kube-api-access-d8g7z\") pod \"99aa8a46-1169-42cc-8693-cd3e0d1f46a5\" (UID: \"99aa8a46-1169-42cc-8693-cd3e0d1f46a5\") " Nov 26 08:29:53 crc kubenswrapper[4909]: I1126 08:29:53.576279 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99aa8a46-1169-42cc-8693-cd3e0d1f46a5-kube-api-access-d8g7z" (OuterVolumeSpecName: "kube-api-access-d8g7z") pod "99aa8a46-1169-42cc-8693-cd3e0d1f46a5" (UID: "99aa8a46-1169-42cc-8693-cd3e0d1f46a5"). InnerVolumeSpecName "kube-api-access-d8g7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:29:53 crc kubenswrapper[4909]: I1126 08:29:53.663564 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8g7z\" (UniqueName: \"kubernetes.io/projected/99aa8a46-1169-42cc-8693-cd3e0d1f46a5-kube-api-access-d8g7z\") on node \"crc\" DevicePath \"\"" Nov 26 08:29:54 crc kubenswrapper[4909]: I1126 08:29:54.037217 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7761-account-create-76j7n" event={"ID":"99aa8a46-1169-42cc-8693-cd3e0d1f46a5","Type":"ContainerDied","Data":"ea0343611d581ff41bdd4c457e9f355473b1f763eda4a89f46eefec62f082a22"} Nov 26 08:29:54 crc kubenswrapper[4909]: I1126 08:29:54.037280 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea0343611d581ff41bdd4c457e9f355473b1f763eda4a89f46eefec62f082a22" Nov 26 08:29:54 crc kubenswrapper[4909]: I1126 08:29:54.037479 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7761-account-create-76j7n" Nov 26 08:29:54 crc kubenswrapper[4909]: I1126 08:29:54.452175 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-30e2-account-create-c5rnd" Nov 26 08:29:54 crc kubenswrapper[4909]: I1126 08:29:54.457492 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4109-account-create-tt7nv" Nov 26 08:29:54 crc kubenswrapper[4909]: I1126 08:29:54.478563 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj98b\" (UniqueName: \"kubernetes.io/projected/032f0f9c-e208-4d1f-a169-4d39679324bb-kube-api-access-kj98b\") pod \"032f0f9c-e208-4d1f-a169-4d39679324bb\" (UID: \"032f0f9c-e208-4d1f-a169-4d39679324bb\") " Nov 26 08:29:54 crc kubenswrapper[4909]: I1126 08:29:54.478630 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvggc\" (UniqueName: \"kubernetes.io/projected/3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8-kube-api-access-xvggc\") pod \"3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8\" (UID: \"3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8\") " Nov 26 08:29:54 crc kubenswrapper[4909]: I1126 08:29:54.483575 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/032f0f9c-e208-4d1f-a169-4d39679324bb-kube-api-access-kj98b" (OuterVolumeSpecName: "kube-api-access-kj98b") pod "032f0f9c-e208-4d1f-a169-4d39679324bb" (UID: "032f0f9c-e208-4d1f-a169-4d39679324bb"). InnerVolumeSpecName "kube-api-access-kj98b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:29:54 crc kubenswrapper[4909]: I1126 08:29:54.490302 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8-kube-api-access-xvggc" (OuterVolumeSpecName: "kube-api-access-xvggc") pod "3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8" (UID: "3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8"). InnerVolumeSpecName "kube-api-access-xvggc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:29:54 crc kubenswrapper[4909]: I1126 08:29:54.580724 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj98b\" (UniqueName: \"kubernetes.io/projected/032f0f9c-e208-4d1f-a169-4d39679324bb-kube-api-access-kj98b\") on node \"crc\" DevicePath \"\"" Nov 26 08:29:54 crc kubenswrapper[4909]: I1126 08:29:54.580766 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvggc\" (UniqueName: \"kubernetes.io/projected/3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8-kube-api-access-xvggc\") on node \"crc\" DevicePath \"\"" Nov 26 08:29:55 crc kubenswrapper[4909]: I1126 08:29:55.053130 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-30e2-account-create-c5rnd" event={"ID":"032f0f9c-e208-4d1f-a169-4d39679324bb","Type":"ContainerDied","Data":"0a60476dc0a107a6c00c21992f5279bebd6018603c0cc843980e1a3a55a1d931"} Nov 26 08:29:55 crc kubenswrapper[4909]: I1126 08:29:55.053539 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a60476dc0a107a6c00c21992f5279bebd6018603c0cc843980e1a3a55a1d931" Nov 26 08:29:55 crc kubenswrapper[4909]: I1126 08:29:55.053207 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-30e2-account-create-c5rnd" Nov 26 08:29:55 crc kubenswrapper[4909]: I1126 08:29:55.059111 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4109-account-create-tt7nv" event={"ID":"3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8","Type":"ContainerDied","Data":"76f588bbf719b0f0b29530d429b57fde4a6dd6b31d3b67d6322e0abf9b543a97"} Nov 26 08:29:55 crc kubenswrapper[4909]: I1126 08:29:55.059146 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76f588bbf719b0f0b29530d429b57fde4a6dd6b31d3b67d6322e0abf9b543a97" Nov 26 08:29:55 crc kubenswrapper[4909]: I1126 08:29:55.059224 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.210073 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gv4kn"]
Nov 26 08:29:56 crc kubenswrapper[4909]: E1126 08:29:56.210496 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8" containerName="mariadb-account-create"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.210513 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8" containerName="mariadb-account-create"
Nov 26 08:29:56 crc kubenswrapper[4909]: E1126 08:29:56.210541 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032f0f9c-e208-4d1f-a169-4d39679324bb" containerName="mariadb-account-create"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.210552 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="032f0f9c-e208-4d1f-a169-4d39679324bb" containerName="mariadb-account-create"
Nov 26 08:29:56 crc kubenswrapper[4909]: E1126 08:29:56.210578 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aa8a46-1169-42cc-8693-cd3e0d1f46a5" containerName="mariadb-account-create"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.210612 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aa8a46-1169-42cc-8693-cd3e0d1f46a5" containerName="mariadb-account-create"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.210892 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8" containerName="mariadb-account-create"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.210918 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="99aa8a46-1169-42cc-8693-cd3e0d1f46a5" containerName="mariadb-account-create"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.210948 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="032f0f9c-e208-4d1f-a169-4d39679324bb" containerName="mariadb-account-create"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.211631 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.216868 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.217129 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.218280 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nzwq8"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.228623 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gv4kn"]
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.313572 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgb2b\" (UniqueName: \"kubernetes.io/projected/b83c9ef4-529f-4ea9-8b56-461560b56616-kube-api-access-fgb2b\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.313654 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-config-data\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.313737 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.313916 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-scripts\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.414973 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.415046 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-scripts\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.415119 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgb2b\" (UniqueName: \"kubernetes.io/projected/b83c9ef4-529f-4ea9-8b56-461560b56616-kube-api-access-fgb2b\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.415140 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-config-data\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.424139 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-scripts\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.424364 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.434248 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-config-data\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.442392 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgb2b\" (UniqueName: \"kubernetes.io/projected/b83c9ef4-529f-4ea9-8b56-461560b56616-kube-api-access-fgb2b\") pod \"nova-cell0-conductor-db-sync-gv4kn\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " pod="openstack/nova-cell0-conductor-db-sync-gv4kn"
Nov 26 08:29:56 crc kubenswrapper[4909]: I1126 08:29:56.556019 4909 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gv4kn" Nov 26 08:29:57 crc kubenswrapper[4909]: I1126 08:29:57.055249 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gv4kn"] Nov 26 08:29:57 crc kubenswrapper[4909]: W1126 08:29:57.061429 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb83c9ef4_529f_4ea9_8b56_461560b56616.slice/crio-48ea42be6dca4b13bca3e74eae9070f3bf6632c6660b54d5dad97e34b5efa22a WatchSource:0}: Error finding container 48ea42be6dca4b13bca3e74eae9070f3bf6632c6660b54d5dad97e34b5efa22a: Status 404 returned error can't find the container with id 48ea42be6dca4b13bca3e74eae9070f3bf6632c6660b54d5dad97e34b5efa22a Nov 26 08:29:57 crc kubenswrapper[4909]: I1126 08:29:57.076924 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gv4kn" event={"ID":"b83c9ef4-529f-4ea9-8b56-461560b56616","Type":"ContainerStarted","Data":"48ea42be6dca4b13bca3e74eae9070f3bf6632c6660b54d5dad97e34b5efa22a"} Nov 26 08:29:58 crc kubenswrapper[4909]: I1126 08:29:58.087671 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gv4kn" event={"ID":"b83c9ef4-529f-4ea9-8b56-461560b56616","Type":"ContainerStarted","Data":"77fcf303adfbfbe695085b81938bf3c2fd7bc42e5dfe9a1a328198cc6b1e72bc"} Nov 26 08:29:58 crc kubenswrapper[4909]: I1126 08:29:58.118193 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gv4kn" podStartSLOduration=2.118163172 podStartE2EDuration="2.118163172s" podCreationTimestamp="2025-11-26 08:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:29:58.104961572 +0000 UTC m=+5370.251172738" watchObservedRunningTime="2025-11-26 08:29:58.118163172 +0000 UTC m=+5370.264374368" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.198814 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd"] Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.201473 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.208539 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.208930 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.219614 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd"] Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.382502 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vkjt\" (UniqueName: \"kubernetes.io/projected/63ebc913-9174-47f4-a1c7-c299a0aba8dd-kube-api-access-2vkjt\") pod \"collect-profiles-29402430-48kfd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.382719 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ebc913-9174-47f4-a1c7-c299a0aba8dd-config-volume\") pod \"collect-profiles-29402430-48kfd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.382841 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63ebc913-9174-47f4-a1c7-c299a0aba8dd-secret-volume\") pod \"collect-profiles-29402430-48kfd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.483870 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vkjt\" (UniqueName: \"kubernetes.io/projected/63ebc913-9174-47f4-a1c7-c299a0aba8dd-kube-api-access-2vkjt\") pod \"collect-profiles-29402430-48kfd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.483931 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ebc913-9174-47f4-a1c7-c299a0aba8dd-config-volume\") pod \"collect-profiles-29402430-48kfd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.483971 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63ebc913-9174-47f4-a1c7-c299a0aba8dd-secret-volume\") pod \"collect-profiles-29402430-48kfd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.484849 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ebc913-9174-47f4-a1c7-c299a0aba8dd-config-volume\") pod 
\"collect-profiles-29402430-48kfd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.495270 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63ebc913-9174-47f4-a1c7-c299a0aba8dd-secret-volume\") pod \"collect-profiles-29402430-48kfd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.505493 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vkjt\" (UniqueName: \"kubernetes.io/projected/63ebc913-9174-47f4-a1c7-c299a0aba8dd-kube-api-access-2vkjt\") pod \"collect-profiles-29402430-48kfd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:00 crc kubenswrapper[4909]: I1126 08:30:00.526437 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:01 crc kubenswrapper[4909]: I1126 08:30:01.021471 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd"] Nov 26 08:30:01 crc kubenswrapper[4909]: I1126 08:30:01.130894 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" event={"ID":"63ebc913-9174-47f4-a1c7-c299a0aba8dd","Type":"ContainerStarted","Data":"c2be50a1bd8b77b5c898ff1db601d8eb1d6147170ba7a1ed15671dec791de9e4"} Nov 26 08:30:02 crc kubenswrapper[4909]: I1126 08:30:02.146985 4909 generic.go:334] "Generic (PLEG): container finished" podID="63ebc913-9174-47f4-a1c7-c299a0aba8dd" containerID="b6602743c38cd37c83774262bb295d927b50a52e64d69325157a302506694128" exitCode=0 Nov 26 08:30:02 crc kubenswrapper[4909]: I1126 08:30:02.148818 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" event={"ID":"63ebc913-9174-47f4-a1c7-c299a0aba8dd","Type":"ContainerDied","Data":"b6602743c38cd37c83774262bb295d927b50a52e64d69325157a302506694128"} Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.159876 4909 generic.go:334] "Generic (PLEG): container finished" podID="b83c9ef4-529f-4ea9-8b56-461560b56616" containerID="77fcf303adfbfbe695085b81938bf3c2fd7bc42e5dfe9a1a328198cc6b1e72bc" exitCode=0 Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.159962 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gv4kn" event={"ID":"b83c9ef4-529f-4ea9-8b56-461560b56616","Type":"ContainerDied","Data":"77fcf303adfbfbe695085b81938bf3c2fd7bc42e5dfe9a1a328198cc6b1e72bc"} Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.489259 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.650244 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ebc913-9174-47f4-a1c7-c299a0aba8dd-config-volume\") pod \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.650914 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vkjt\" (UniqueName: \"kubernetes.io/projected/63ebc913-9174-47f4-a1c7-c299a0aba8dd-kube-api-access-2vkjt\") pod \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.651088 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63ebc913-9174-47f4-a1c7-c299a0aba8dd-secret-volume\") pod \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\" (UID: \"63ebc913-9174-47f4-a1c7-c299a0aba8dd\") " Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.651823 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63ebc913-9174-47f4-a1c7-c299a0aba8dd-config-volume" (OuterVolumeSpecName: "config-volume") pod "63ebc913-9174-47f4-a1c7-c299a0aba8dd" (UID: "63ebc913-9174-47f4-a1c7-c299a0aba8dd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.657682 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ebc913-9174-47f4-a1c7-c299a0aba8dd-kube-api-access-2vkjt" (OuterVolumeSpecName: "kube-api-access-2vkjt") pod "63ebc913-9174-47f4-a1c7-c299a0aba8dd" (UID: "63ebc913-9174-47f4-a1c7-c299a0aba8dd"). InnerVolumeSpecName "kube-api-access-2vkjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.659802 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ebc913-9174-47f4-a1c7-c299a0aba8dd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "63ebc913-9174-47f4-a1c7-c299a0aba8dd" (UID: "63ebc913-9174-47f4-a1c7-c299a0aba8dd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.752925 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vkjt\" (UniqueName: \"kubernetes.io/projected/63ebc913-9174-47f4-a1c7-c299a0aba8dd-kube-api-access-2vkjt\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.752957 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63ebc913-9174-47f4-a1c7-c299a0aba8dd-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:03 crc kubenswrapper[4909]: I1126 08:30:03.752967 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ebc913-9174-47f4-a1c7-c299a0aba8dd-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.172568 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.172568 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd" event={"ID":"63ebc913-9174-47f4-a1c7-c299a0aba8dd","Type":"ContainerDied","Data":"c2be50a1bd8b77b5c898ff1db601d8eb1d6147170ba7a1ed15671dec791de9e4"} Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.172651 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2be50a1bd8b77b5c898ff1db601d8eb1d6147170ba7a1ed15671dec791de9e4" Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.555035 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gv4kn" Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.567628 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"] Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.580470 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402385-kx5qc"] Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.667951 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgb2b\" (UniqueName: \"kubernetes.io/projected/b83c9ef4-529f-4ea9-8b56-461560b56616-kube-api-access-fgb2b\") pod \"b83c9ef4-529f-4ea9-8b56-461560b56616\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.668006 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-scripts\") pod \"b83c9ef4-529f-4ea9-8b56-461560b56616\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.668102 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-combined-ca-bundle\") pod \"b83c9ef4-529f-4ea9-8b56-461560b56616\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.668129 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-config-data\") pod \"b83c9ef4-529f-4ea9-8b56-461560b56616\" (UID: \"b83c9ef4-529f-4ea9-8b56-461560b56616\") " Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.673080 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b83c9ef4-529f-4ea9-8b56-461560b56616-kube-api-access-fgb2b" (OuterVolumeSpecName: "kube-api-access-fgb2b") pod "b83c9ef4-529f-4ea9-8b56-461560b56616" (UID: "b83c9ef4-529f-4ea9-8b56-461560b56616"). InnerVolumeSpecName "kube-api-access-fgb2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.673693 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-scripts" (OuterVolumeSpecName: "scripts") pod "b83c9ef4-529f-4ea9-8b56-461560b56616" (UID: "b83c9ef4-529f-4ea9-8b56-461560b56616"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.701217 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b83c9ef4-529f-4ea9-8b56-461560b56616" (UID: "b83c9ef4-529f-4ea9-8b56-461560b56616"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.711190 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-config-data" (OuterVolumeSpecName: "config-data") pod "b83c9ef4-529f-4ea9-8b56-461560b56616" (UID: "b83c9ef4-529f-4ea9-8b56-461560b56616"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.771382 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.771435 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.771455 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83c9ef4-529f-4ea9-8b56-461560b56616-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:04 crc kubenswrapper[4909]: I1126 08:30:04.771472 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgb2b\" (UniqueName: \"kubernetes.io/projected/b83c9ef4-529f-4ea9-8b56-461560b56616-kube-api-access-fgb2b\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.192662 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gv4kn" event={"ID":"b83c9ef4-529f-4ea9-8b56-461560b56616","Type":"ContainerDied","Data":"48ea42be6dca4b13bca3e74eae9070f3bf6632c6660b54d5dad97e34b5efa22a"} Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.192738 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48ea42be6dca4b13bca3e74eae9070f3bf6632c6660b54d5dad97e34b5efa22a" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.192839 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gv4kn" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.293886 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 08:30:05 crc kubenswrapper[4909]: E1126 08:30:05.294248 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ebc913-9174-47f4-a1c7-c299a0aba8dd" containerName="collect-profiles" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.294266 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ebc913-9174-47f4-a1c7-c299a0aba8dd" containerName="collect-profiles" Nov 26 08:30:05 crc kubenswrapper[4909]: E1126 08:30:05.294288 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83c9ef4-529f-4ea9-8b56-461560b56616" containerName="nova-cell0-conductor-db-sync" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.294294 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83c9ef4-529f-4ea9-8b56-461560b56616" containerName="nova-cell0-conductor-db-sync" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.294476 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b83c9ef4-529f-4ea9-8b56-461560b56616" containerName="nova-cell0-conductor-db-sync" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.294496 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ebc913-9174-47f4-a1c7-c299a0aba8dd" containerName="collect-profiles" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.295144 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.297068 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nzwq8" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.297865 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.317803 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.484669 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnftp\" (UniqueName: \"kubernetes.io/projected/9fab9a27-8a55-4940-9006-be7909597eff-kube-api-access-jnftp\") pod \"nova-cell0-conductor-0\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.484753 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.484839 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.586407 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnftp\" (UniqueName: 
\"kubernetes.io/projected/9fab9a27-8a55-4940-9006-be7909597eff-kube-api-access-jnftp\") pod \"nova-cell0-conductor-0\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.586494 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.586536 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.593240 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.597640 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.607855 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnftp\" (UniqueName: \"kubernetes.io/projected/9fab9a27-8a55-4940-9006-be7909597eff-kube-api-access-jnftp\") pod \"nova-cell0-conductor-0\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:05 crc kubenswrapper[4909]: I1126 08:30:05.614714 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:06 crc kubenswrapper[4909]: I1126 08:30:06.097584 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 08:30:06 crc kubenswrapper[4909]: I1126 08:30:06.204183 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9fab9a27-8a55-4940-9006-be7909597eff","Type":"ContainerStarted","Data":"130cb0f9c458f80b4443038a17c7c1140c4b35bbae2ae8b9286d971431c26937"} Nov 26 08:30:06 crc kubenswrapper[4909]: I1126 08:30:06.519836 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fe923a1-15bb-4e6d-bb3d-5eecfd55f843" path="/var/lib/kubelet/pods/7fe923a1-15bb-4e6d-bb3d-5eecfd55f843/volumes" Nov 26 08:30:07 crc kubenswrapper[4909]: I1126 08:30:07.219800 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9fab9a27-8a55-4940-9006-be7909597eff","Type":"ContainerStarted","Data":"3b81c3f18f99dfa911806847beaacd3c5cc182f9ca0bf56045a82e37d930f84b"} Nov 26 08:30:07 crc kubenswrapper[4909]: I1126 08:30:07.220255 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:07 crc kubenswrapper[4909]: I1126 08:30:07.252172 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.252146513 podStartE2EDuration="2.252146513s" podCreationTimestamp="2025-11-26 08:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:07.242638524 +0000 UTC m=+5379.388849730" watchObservedRunningTime="2025-11-26 08:30:07.252146513 +0000 UTC m=+5379.398357719" Nov 26 08:30:15 crc kubenswrapper[4909]: I1126 08:30:15.661401 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.212209 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mg6cp"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.213943 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.217863 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.219189 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.238929 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mg6cp"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.302376 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-config-data\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.302946 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-scripts\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.302997 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.303027 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqkpq\" (UniqueName: \"kubernetes.io/projected/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-kube-api-access-mqkpq\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.399798 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.400944 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.404807 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-config-data\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.404894 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-scripts\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.404927 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.404950 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqkpq\" (UniqueName: \"kubernetes.io/projected/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-kube-api-access-mqkpq\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.407140 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.414092 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-config-data\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.415170 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.418043 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-scripts\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.430069 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqkpq\" (UniqueName: \"kubernetes.io/projected/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-kube-api-access-mqkpq\") pod \"nova-cell0-cell-mapping-mg6cp\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.432118 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.433307 4909 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.454371 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.460724 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.474309 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.506285 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.506373 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-config-data\") pod \"nova-scheduler-0\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.506404 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.506427 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqwsb\" (UniqueName: \"kubernetes.io/projected/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-kube-api-access-xqwsb\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.506448 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.506469 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nlld\" (UniqueName: \"kubernetes.io/projected/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-kube-api-access-8nlld\") pod \"nova-scheduler-0\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.532527 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.551650 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.553923 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.554004 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.557797 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.618844 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8g8x\" (UniqueName: \"kubernetes.io/projected/aa84e94e-86c4-45f5-856d-55053bb099a8-kube-api-access-r8g8x\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.618892 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.618982 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.619023 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-config-data\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.619038 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa84e94e-86c4-45f5-856d-55053bb099a8-logs\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.619076 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-config-data\") pod \"nova-scheduler-0\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.619108 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.619129 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqwsb\" (UniqueName: \"kubernetes.io/projected/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-kube-api-access-xqwsb\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.619150 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.619172 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nlld\" (UniqueName: \"kubernetes.io/projected/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-kube-api-access-8nlld\") pod \"nova-scheduler-0\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.650216 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.653071 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.669098 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.673662 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-config-data\") pod \"nova-scheduler-0\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.701874 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqwsb\" (UniqueName: \"kubernetes.io/projected/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-kube-api-access-xqwsb\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.702710 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nlld\" (UniqueName: \"kubernetes.io/projected/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-kube-api-access-8nlld\") pod \"nova-scheduler-0\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.723354 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-config-data\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.723413 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa84e94e-86c4-45f5-856d-55053bb099a8-logs\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.723716 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8g8x\" (UniqueName: 
\"kubernetes.io/projected/aa84e94e-86c4-45f5-856d-55053bb099a8-kube-api-access-r8g8x\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.723760 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.725246 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa84e94e-86c4-45f5-856d-55053bb099a8-logs\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.732323 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.735848 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-config-data\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.747416 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8g8x\" (UniqueName: \"kubernetes.io/projected/aa84e94e-86c4-45f5-856d-55053bb099a8-kube-api-access-r8g8x\") pod \"nova-metadata-0\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.783968 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.786449 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.792159 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.823001 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.824067 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69dc7db885-gzxxg"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.825819 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.828376 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.845547 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.855871 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69dc7db885-gzxxg"] Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.885674 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.929522 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-sb\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.929616 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.929640 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-dns-svc\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.930052 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-config-data\") pod \"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.930237 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57lh9\" (UniqueName: \"kubernetes.io/projected/dc91e68f-682c-4264-bba3-20021adc7996-kube-api-access-57lh9\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.930475 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-config\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.930529 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac6df06-7bc0-4e8d-acb5-03205f191eef-logs\") pod \"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.930619 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tx4n\" (UniqueName: \"kubernetes.io/projected/1ac6df06-7bc0-4e8d-acb5-03205f191eef-kube-api-access-5tx4n\") pod 
\"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:16 crc kubenswrapper[4909]: I1126 08:30:16.930650 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-nb\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.009063 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mg6cp"] Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.032097 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57lh9\" (UniqueName: \"kubernetes.io/projected/dc91e68f-682c-4264-bba3-20021adc7996-kube-api-access-57lh9\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.032186 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-config\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.032212 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac6df06-7bc0-4e8d-acb5-03205f191eef-logs\") pod \"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.032235 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tx4n\" (UniqueName: \"kubernetes.io/projected/1ac6df06-7bc0-4e8d-acb5-03205f191eef-kube-api-access-5tx4n\") pod \"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.032256 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-nb\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.032285 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-sb\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.032314 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.032330 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-dns-svc\") pod 
\"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.032380 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-config-data\") pod \"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.034119 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac6df06-7bc0-4e8d-acb5-03205f191eef-logs\") pod \"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.034443 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-config\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.035126 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-sb\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.035234 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-nb\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.035843 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-config-data\") pod \"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.035930 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-dns-svc\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.041958 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.051375 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57lh9\" (UniqueName: \"kubernetes.io/projected/dc91e68f-682c-4264-bba3-20021adc7996-kube-api-access-57lh9\") pod \"dnsmasq-dns-69dc7db885-gzxxg\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") " pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.053108 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tx4n\" (UniqueName: 
\"kubernetes.io/projected/1ac6df06-7bc0-4e8d-acb5-03205f191eef-kube-api-access-5tx4n\") pod \"nova-api-0\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " pod="openstack/nova-api-0" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.123685 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.150617 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.211782 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-npkqd"] Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.213531 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.215956 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.216455 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.221017 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-npkqd"] Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.338223 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mg6cp" event={"ID":"49eabb59-89f9-4fbd-8b96-7a7464bdaf30","Type":"ContainerStarted","Data":"45b22906308fa145f2aa9949cf531b960406466c58ab7b4f9f78334e245cf799"} Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.338617 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mg6cp" event={"ID":"49eabb59-89f9-4fbd-8b96-7a7464bdaf30","Type":"ContainerStarted","Data":"ac17d152121b30d6a38b97e8414817cae5b58d9b87a93485236dc06970de3fb3"} Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.357647 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-config-data\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.357735 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.357762 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-scripts\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.357826 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwrb2\" (UniqueName: 
\"kubernetes.io/projected/a4d00dc1-4670-458f-842f-333fa41779ca-kube-api-access-wwrb2\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.385546 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.398400 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mg6cp" podStartSLOduration=1.3983737330000001 podStartE2EDuration="1.398373733s" podCreationTimestamp="2025-11-26 08:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:17.363840051 +0000 UTC m=+5389.510051217" watchObservedRunningTime="2025-11-26 08:30:17.398373733 +0000 UTC m=+5389.544584899" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.460112 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwrb2\" (UniqueName: \"kubernetes.io/projected/a4d00dc1-4670-458f-842f-333fa41779ca-kube-api-access-wwrb2\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.460253 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-config-data\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.460318 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.460339 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-scripts\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.472579 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-config-data\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.472945 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.475420 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-scripts\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.482790 4909 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.487141 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.496184 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwrb2\" (UniqueName: \"kubernetes.io/projected/a4d00dc1-4670-458f-842f-333fa41779ca-kube-api-access-wwrb2\") pod \"nova-cell1-conductor-db-sync-npkqd\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.497897 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:17 crc kubenswrapper[4909]: W1126 08:30:17.498235 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ac6df06_7bc0_4e8d_acb5_03205f191eef.slice/crio-01facaebc045405b232770efe6847f138550158217c726f066f93d71b1518d1e WatchSource:0}: Error finding container 01facaebc045405b232770efe6847f138550158217c726f066f93d71b1518d1e: Status 404 returned error can't find the container with id 01facaebc045405b232770efe6847f138550158217c726f066f93d71b1518d1e Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.548056 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:17 crc kubenswrapper[4909]: I1126 08:30:17.761607 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69dc7db885-gzxxg"] Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.083364 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-npkqd"] Nov 26 08:30:18 crc kubenswrapper[4909]: W1126 08:30:18.154036 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4d00dc1_4670_458f_842f_333fa41779ca.slice/crio-d5c26a6c0e219749a0890dcaffb801e5a352d5a61d1ea9154d1dbd3b00780728 WatchSource:0}: Error finding container d5c26a6c0e219749a0890dcaffb801e5a352d5a61d1ea9154d1dbd3b00780728: Status 404 returned error can't find the container with id d5c26a6c0e219749a0890dcaffb801e5a352d5a61d1ea9154d1dbd3b00780728 Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.353297 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa84e94e-86c4-45f5-856d-55053bb099a8","Type":"ContainerStarted","Data":"af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.353344 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa84e94e-86c4-45f5-856d-55053bb099a8","Type":"ContainerStarted","Data":"8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.353356 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa84e94e-86c4-45f5-856d-55053bb099a8","Type":"ContainerStarted","Data":"f89727f6939d54f4ee5cb00fba461e6782c24565032d014b998c76efd19d511d"} Nov 26 
08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.356713 4909 generic.go:334] "Generic (PLEG): container finished" podID="dc91e68f-682c-4264-bba3-20021adc7996" containerID="47b15f09d246279e96bd9311f487ed379c5e4789f6de9422a06a7dc3802a80b7" exitCode=0 Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.356770 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" event={"ID":"dc91e68f-682c-4264-bba3-20021adc7996","Type":"ContainerDied","Data":"47b15f09d246279e96bd9311f487ed379c5e4789f6de9422a06a7dc3802a80b7"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.356792 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" event={"ID":"dc91e68f-682c-4264-bba3-20021adc7996","Type":"ContainerStarted","Data":"03f9e03eccd814fd50f4d5246a3a4876603ed27a069d59bb9178f89cefd2b27b"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.363297 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ac6df06-7bc0-4e8d-acb5-03205f191eef","Type":"ContainerStarted","Data":"6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.363343 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ac6df06-7bc0-4e8d-acb5-03205f191eef","Type":"ContainerStarted","Data":"c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.363354 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ac6df06-7bc0-4e8d-acb5-03205f191eef","Type":"ContainerStarted","Data":"01facaebc045405b232770efe6847f138550158217c726f066f93d71b1518d1e"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.365474 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-npkqd" event={"ID":"a4d00dc1-4670-458f-842f-333fa41779ca","Type":"ContainerStarted","Data":"7020f8eec2e4d8de75c04b71efa4b6a2066f971e7ab0f1d508998ceeb2cfbabf"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.365500 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-npkqd" event={"ID":"a4d00dc1-4670-458f-842f-333fa41779ca","Type":"ContainerStarted","Data":"d5c26a6c0e219749a0890dcaffb801e5a352d5a61d1ea9154d1dbd3b00780728"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.367708 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c0447bc-e6bf-436b-b47a-ebbec10e36e5","Type":"ContainerStarted","Data":"3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.367732 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c0447bc-e6bf-436b-b47a-ebbec10e36e5","Type":"ContainerStarted","Data":"ffa398c407d9466866efe4725ac3b7136e74ba288636dc64bbf306804c3c6bb1"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.370762 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0a0db1d6-8015-4a27-9c53-ad181dceb4eb","Type":"ContainerStarted","Data":"eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.370788 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"0a0db1d6-8015-4a27-9c53-ad181dceb4eb","Type":"ContainerStarted","Data":"0ea65c78dafd70907f00212d3aaebd9ba2c95d88b7f12c5d6ffcff867df1e91f"} Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.380215 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.380199061 podStartE2EDuration="2.380199061s" podCreationTimestamp="2025-11-26 08:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:18.375432252 +0000 UTC m=+5390.521643418" watchObservedRunningTime="2025-11-26 08:30:18.380199061 +0000 UTC m=+5390.526410227" Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.453547 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.453529014 podStartE2EDuration="2.453529014s" podCreationTimestamp="2025-11-26 08:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:18.445744561 +0000 UTC m=+5390.591955727" watchObservedRunningTime="2025-11-26 08:30:18.453529014 +0000 UTC m=+5390.599740180" Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.520087 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.52006967 podStartE2EDuration="2.52006967s" podCreationTimestamp="2025-11-26 08:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:18.500956488 +0000 UTC m=+5390.647167654" watchObservedRunningTime="2025-11-26 08:30:18.52006967 +0000 UTC m=+5390.666280836" Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.588193 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.58816112 podStartE2EDuration="2.58816112s" podCreationTimestamp="2025-11-26 08:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:18.577418567 +0000 UTC m=+5390.723629723" watchObservedRunningTime="2025-11-26 08:30:18.58816112 +0000 UTC m=+5390.734372276" Nov 26 08:30:18 crc kubenswrapper[4909]: I1126 08:30:18.588371 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-npkqd" podStartSLOduration=1.588366206 podStartE2EDuration="1.588366206s" podCreationTimestamp="2025-11-26 08:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:18.54933518 +0000 UTC m=+5390.695546346" watchObservedRunningTime="2025-11-26 08:30:18.588366206 +0000 UTC m=+5390.734577362" Nov 26 08:30:19 crc kubenswrapper[4909]: I1126 08:30:19.383029 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" event={"ID":"dc91e68f-682c-4264-bba3-20021adc7996","Type":"ContainerStarted","Data":"04fabe8516c77486a4e7fb80559f8abc7ed90f75ebbe157c43c7895ab2544082"} Nov 26 08:30:19 crc kubenswrapper[4909]: I1126 08:30:19.417701 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" podStartSLOduration=3.417670009 podStartE2EDuration="3.417670009s" podCreationTimestamp="2025-11-26 
08:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:19.407381128 +0000 UTC m=+5391.553592294" watchObservedRunningTime="2025-11-26 08:30:19.417670009 +0000 UTC m=+5391.563881195" Nov 26 08:30:20 crc kubenswrapper[4909]: I1126 08:30:20.391110 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:21 crc kubenswrapper[4909]: I1126 08:30:21.405305 4909 generic.go:334] "Generic (PLEG): container finished" podID="a4d00dc1-4670-458f-842f-333fa41779ca" containerID="7020f8eec2e4d8de75c04b71efa4b6a2066f971e7ab0f1d508998ceeb2cfbabf" exitCode=0 Nov 26 08:30:21 crc kubenswrapper[4909]: I1126 08:30:21.405449 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-npkqd" event={"ID":"a4d00dc1-4670-458f-842f-333fa41779ca","Type":"ContainerDied","Data":"7020f8eec2e4d8de75c04b71efa4b6a2066f971e7ab0f1d508998ceeb2cfbabf"} Nov 26 08:30:21 crc kubenswrapper[4909]: I1126 08:30:21.824115 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 26 08:30:21 crc kubenswrapper[4909]: I1126 08:30:21.829392 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:21 crc kubenswrapper[4909]: I1126 08:30:21.889252 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 08:30:21 crc kubenswrapper[4909]: I1126 08:30:21.889360 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.417615 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mg6cp" event={"ID":"49eabb59-89f9-4fbd-8b96-7a7464bdaf30","Type":"ContainerDied","Data":"45b22906308fa145f2aa9949cf531b960406466c58ab7b4f9f78334e245cf799"} Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.417627 4909 generic.go:334] "Generic (PLEG): container finished" podID="49eabb59-89f9-4fbd-8b96-7a7464bdaf30" containerID="45b22906308fa145f2aa9949cf531b960406466c58ab7b4f9f78334e245cf799" exitCode=0 Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.764620 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.890678 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-config-data\") pod \"a4d00dc1-4670-458f-842f-333fa41779ca\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.890753 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-scripts\") pod \"a4d00dc1-4670-458f-842f-333fa41779ca\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.890802 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-combined-ca-bundle\") pod \"a4d00dc1-4670-458f-842f-333fa41779ca\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.890941 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwrb2\" (UniqueName: \"kubernetes.io/projected/a4d00dc1-4670-458f-842f-333fa41779ca-kube-api-access-wwrb2\") pod \"a4d00dc1-4670-458f-842f-333fa41779ca\" (UID: \"a4d00dc1-4670-458f-842f-333fa41779ca\") " Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.897360 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d00dc1-4670-458f-842f-333fa41779ca-kube-api-access-wwrb2" (OuterVolumeSpecName: "kube-api-access-wwrb2") pod "a4d00dc1-4670-458f-842f-333fa41779ca" (UID: "a4d00dc1-4670-458f-842f-333fa41779ca"). InnerVolumeSpecName "kube-api-access-wwrb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.912121 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-scripts" (OuterVolumeSpecName: "scripts") pod "a4d00dc1-4670-458f-842f-333fa41779ca" (UID: "a4d00dc1-4670-458f-842f-333fa41779ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.931814 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-config-data" (OuterVolumeSpecName: "config-data") pod "a4d00dc1-4670-458f-842f-333fa41779ca" (UID: "a4d00dc1-4670-458f-842f-333fa41779ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.943285 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4d00dc1-4670-458f-842f-333fa41779ca" (UID: "a4d00dc1-4670-458f-842f-333fa41779ca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.993418 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.993460 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.993473 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d00dc1-4670-458f-842f-333fa41779ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:22 crc kubenswrapper[4909]: I1126 08:30:22.993489 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwrb2\" (UniqueName: \"kubernetes.io/projected/a4d00dc1-4670-458f-842f-333fa41779ca-kube-api-access-wwrb2\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.431176 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-npkqd" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.431234 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-npkqd" event={"ID":"a4d00dc1-4670-458f-842f-333fa41779ca","Type":"ContainerDied","Data":"d5c26a6c0e219749a0890dcaffb801e5a352d5a61d1ea9154d1dbd3b00780728"} Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.431298 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c26a6c0e219749a0890dcaffb801e5a352d5a61d1ea9154d1dbd3b00780728" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.527990 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 08:30:23 crc kubenswrapper[4909]: E1126 08:30:23.528357 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d00dc1-4670-458f-842f-333fa41779ca" containerName="nova-cell1-conductor-db-sync" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.528374 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d00dc1-4670-458f-842f-333fa41779ca" containerName="nova-cell1-conductor-db-sync" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.528561 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d00dc1-4670-458f-842f-333fa41779ca" containerName="nova-cell1-conductor-db-sync" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.529196 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.531907 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.562115 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.606842 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.606910 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.606935 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnn66\" (UniqueName: \"kubernetes.io/projected/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-kube-api-access-wnn66\") pod \"nova-cell1-conductor-0\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.709013 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.709315 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.709334 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnn66\" (UniqueName: \"kubernetes.io/projected/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-kube-api-access-wnn66\") pod \"nova-cell1-conductor-0\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.715953 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.730761 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.733506 4909 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnn66\" (UniqueName: \"kubernetes.io/projected/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-kube-api-access-wnn66\") pod \"nova-cell1-conductor-0\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.820138 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.859621 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.912277 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-combined-ca-bundle\") pod \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.912552 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqkpq\" (UniqueName: \"kubernetes.io/projected/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-kube-api-access-mqkpq\") pod \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.912703 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-scripts\") pod \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.912837 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-config-data\") pod \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\" (UID: \"49eabb59-89f9-4fbd-8b96-7a7464bdaf30\") " Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.915935 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-scripts" (OuterVolumeSpecName: "scripts") pod "49eabb59-89f9-4fbd-8b96-7a7464bdaf30" (UID: "49eabb59-89f9-4fbd-8b96-7a7464bdaf30"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.922770 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-kube-api-access-mqkpq" (OuterVolumeSpecName: "kube-api-access-mqkpq") pod "49eabb59-89f9-4fbd-8b96-7a7464bdaf30" (UID: "49eabb59-89f9-4fbd-8b96-7a7464bdaf30"). InnerVolumeSpecName "kube-api-access-mqkpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.940909 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-config-data" (OuterVolumeSpecName: "config-data") pod "49eabb59-89f9-4fbd-8b96-7a7464bdaf30" (UID: "49eabb59-89f9-4fbd-8b96-7a7464bdaf30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:23 crc kubenswrapper[4909]: I1126 08:30:23.953901 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49eabb59-89f9-4fbd-8b96-7a7464bdaf30" (UID: "49eabb59-89f9-4fbd-8b96-7a7464bdaf30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.015304 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.015660 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.015675 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqkpq\" (UniqueName: \"kubernetes.io/projected/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-kube-api-access-mqkpq\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.015685 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49eabb59-89f9-4fbd-8b96-7a7464bdaf30-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.308380 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 08:30:24 crc kubenswrapper[4909]: W1126 08:30:24.318115 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b3a4ae_809b_43b1_b4e1_3d4815a7a714.slice/crio-80ada2ffc25978000898eec42ba1f609341f7c0da12f5588a7dce7d602d266c1 WatchSource:0}: Error finding container 80ada2ffc25978000898eec42ba1f609341f7c0da12f5588a7dce7d602d266c1: Status 404 returned error can't find the container with id 80ada2ffc25978000898eec42ba1f609341f7c0da12f5588a7dce7d602d266c1 Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.443402 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714","Type":"ContainerStarted","Data":"80ada2ffc25978000898eec42ba1f609341f7c0da12f5588a7dce7d602d266c1"} Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.446337 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mg6cp" event={"ID":"49eabb59-89f9-4fbd-8b96-7a7464bdaf30","Type":"ContainerDied","Data":"ac17d152121b30d6a38b97e8414817cae5b58d9b87a93485236dc06970de3fb3"} Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.446384 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac17d152121b30d6a38b97e8414817cae5b58d9b87a93485236dc06970de3fb3" Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.446459 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mg6cp" Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.677924 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.678466 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1ac6df06-7bc0-4e8d-acb5-03205f191eef" containerName="nova-api-log" containerID="cri-o://c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc" gracePeriod=30 Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.679772 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1ac6df06-7bc0-4e8d-acb5-03205f191eef" containerName="nova-api-api" containerID="cri-o://6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a" gracePeriod=30 Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.692200 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.692471 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7c0447bc-e6bf-436b-b47a-ebbec10e36e5" containerName="nova-scheduler-scheduler" containerID="cri-o://3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5" gracePeriod=30 Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.722798 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.723020 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aa84e94e-86c4-45f5-856d-55053bb099a8" containerName="nova-metadata-log" containerID="cri-o://8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a" gracePeriod=30 Nov 26 08:30:24 crc kubenswrapper[4909]: I1126 08:30:24.723169 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aa84e94e-86c4-45f5-856d-55053bb099a8" containerName="nova-metadata-metadata" containerID="cri-o://af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f" gracePeriod=30 Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.257504 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.263760 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.340074 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-combined-ca-bundle\") pod \"aa84e94e-86c4-45f5-856d-55053bb099a8\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.340155 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac6df06-7bc0-4e8d-acb5-03205f191eef-logs\") pod \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.340184 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-config-data\") pod \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.340217 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-combined-ca-bundle\") pod \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.340263 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa84e94e-86c4-45f5-856d-55053bb099a8-logs\") pod \"aa84e94e-86c4-45f5-856d-55053bb099a8\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.340293 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8g8x\" (UniqueName: \"kubernetes.io/projected/aa84e94e-86c4-45f5-856d-55053bb099a8-kube-api-access-r8g8x\") pod \"aa84e94e-86c4-45f5-856d-55053bb099a8\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.340405 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-config-data\") pod \"aa84e94e-86c4-45f5-856d-55053bb099a8\" (UID: \"aa84e94e-86c4-45f5-856d-55053bb099a8\") " Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.340451 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tx4n\" (UniqueName: \"kubernetes.io/projected/1ac6df06-7bc0-4e8d-acb5-03205f191eef-kube-api-access-5tx4n\") pod \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\" (UID: \"1ac6df06-7bc0-4e8d-acb5-03205f191eef\") " Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.340667 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa84e94e-86c4-45f5-856d-55053bb099a8-logs" (OuterVolumeSpecName: "logs") pod "aa84e94e-86c4-45f5-856d-55053bb099a8" (UID: "aa84e94e-86c4-45f5-856d-55053bb099a8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.341129 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa84e94e-86c4-45f5-856d-55053bb099a8-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.341638 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ac6df06-7bc0-4e8d-acb5-03205f191eef-logs" (OuterVolumeSpecName: "logs") pod "1ac6df06-7bc0-4e8d-acb5-03205f191eef" (UID: "1ac6df06-7bc0-4e8d-acb5-03205f191eef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.345357 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac6df06-7bc0-4e8d-acb5-03205f191eef-kube-api-access-5tx4n" (OuterVolumeSpecName: "kube-api-access-5tx4n") pod "1ac6df06-7bc0-4e8d-acb5-03205f191eef" (UID: "1ac6df06-7bc0-4e8d-acb5-03205f191eef"). InnerVolumeSpecName "kube-api-access-5tx4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.351657 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa84e94e-86c4-45f5-856d-55053bb099a8-kube-api-access-r8g8x" (OuterVolumeSpecName: "kube-api-access-r8g8x") pod "aa84e94e-86c4-45f5-856d-55053bb099a8" (UID: "aa84e94e-86c4-45f5-856d-55053bb099a8"). InnerVolumeSpecName "kube-api-access-r8g8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.365525 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa84e94e-86c4-45f5-856d-55053bb099a8" (UID: "aa84e94e-86c4-45f5-856d-55053bb099a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.370093 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-config-data" (OuterVolumeSpecName: "config-data") pod "1ac6df06-7bc0-4e8d-acb5-03205f191eef" (UID: "1ac6df06-7bc0-4e8d-acb5-03205f191eef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.370668 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-config-data" (OuterVolumeSpecName: "config-data") pod "aa84e94e-86c4-45f5-856d-55053bb099a8" (UID: "aa84e94e-86c4-45f5-856d-55053bb099a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.377345 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ac6df06-7bc0-4e8d-acb5-03205f191eef" (UID: "1ac6df06-7bc0-4e8d-acb5-03205f191eef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.443275 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tx4n\" (UniqueName: \"kubernetes.io/projected/1ac6df06-7bc0-4e8d-acb5-03205f191eef-kube-api-access-5tx4n\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.443311 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.443321 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac6df06-7bc0-4e8d-acb5-03205f191eef-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.443332 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.443340 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac6df06-7bc0-4e8d-acb5-03205f191eef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.443349 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8g8x\" (UniqueName: \"kubernetes.io/projected/aa84e94e-86c4-45f5-856d-55053bb099a8-kube-api-access-r8g8x\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.443358 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa84e94e-86c4-45f5-856d-55053bb099a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.455494 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714","Type":"ContainerStarted","Data":"3937dce0f30503250c04023587ef92a3292ab7739c2bc77f4eeaa6660dc450f6"} Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.456408 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.459116 4909 generic.go:334] "Generic (PLEG): container finished" podID="aa84e94e-86c4-45f5-856d-55053bb099a8" containerID="af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f" exitCode=0 Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.459148 4909 generic.go:334] "Generic (PLEG): container finished" podID="aa84e94e-86c4-45f5-856d-55053bb099a8" containerID="8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a" exitCode=143 Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.459185 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa84e94e-86c4-45f5-856d-55053bb099a8","Type":"ContainerDied","Data":"af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f"} Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.459207 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa84e94e-86c4-45f5-856d-55053bb099a8","Type":"ContainerDied","Data":"8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a"} Nov 26 08:30:25 crc 
kubenswrapper[4909]: I1126 08:30:25.459218 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa84e94e-86c4-45f5-856d-55053bb099a8","Type":"ContainerDied","Data":"f89727f6939d54f4ee5cb00fba461e6782c24565032d014b998c76efd19d511d"} Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.459237 4909 scope.go:117] "RemoveContainer" containerID="af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.459361 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.467672 4909 generic.go:334] "Generic (PLEG): container finished" podID="1ac6df06-7bc0-4e8d-acb5-03205f191eef" containerID="6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a" exitCode=0 Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.467709 4909 generic.go:334] "Generic (PLEG): container finished" podID="1ac6df06-7bc0-4e8d-acb5-03205f191eef" containerID="c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc" exitCode=143 Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.467739 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ac6df06-7bc0-4e8d-acb5-03205f191eef","Type":"ContainerDied","Data":"6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a"} Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.467769 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ac6df06-7bc0-4e8d-acb5-03205f191eef","Type":"ContainerDied","Data":"c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc"} Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.467789 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ac6df06-7bc0-4e8d-acb5-03205f191eef","Type":"ContainerDied","Data":"01facaebc045405b232770efe6847f138550158217c726f066f93d71b1518d1e"} Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.467858 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.475451 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.475434124 podStartE2EDuration="2.475434124s" podCreationTimestamp="2025-11-26 08:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:25.472081963 +0000 UTC m=+5397.618293129" watchObservedRunningTime="2025-11-26 08:30:25.475434124 +0000 UTC m=+5397.621645290" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.520292 4909 scope.go:117] "RemoveContainer" containerID="8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.534182 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.546289 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.561634 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:25 crc kubenswrapper[4909]: E1126 08:30:25.562077 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa84e94e-86c4-45f5-856d-55053bb099a8" containerName="nova-metadata-log" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.575157 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa84e94e-86c4-45f5-856d-55053bb099a8" containerName="nova-metadata-log" Nov 26 08:30:25 crc kubenswrapper[4909]: E1126 08:30:25.575235 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49eabb59-89f9-4fbd-8b96-7a7464bdaf30" containerName="nova-manage" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.575249 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="49eabb59-89f9-4fbd-8b96-7a7464bdaf30" containerName="nova-manage" Nov 26 08:30:25 crc kubenswrapper[4909]: E1126 08:30:25.575301 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa84e94e-86c4-45f5-856d-55053bb099a8" containerName="nova-metadata-metadata" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.575315 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa84e94e-86c4-45f5-856d-55053bb099a8" containerName="nova-metadata-metadata" Nov 26 08:30:25 crc kubenswrapper[4909]: E1126 08:30:25.575366 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac6df06-7bc0-4e8d-acb5-03205f191eef" containerName="nova-api-api" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.575375 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac6df06-7bc0-4e8d-acb5-03205f191eef" containerName="nova-api-api" Nov 26 08:30:25 crc kubenswrapper[4909]: E1126 08:30:25.578891 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac6df06-7bc0-4e8d-acb5-03205f191eef" containerName="nova-api-log" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.578922 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac6df06-7bc0-4e8d-acb5-03205f191eef" containerName="nova-api-log" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.579848 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac6df06-7bc0-4e8d-acb5-03205f191eef" containerName="nova-api-api" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.579914 4909 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="aa84e94e-86c4-45f5-856d-55053bb099a8" containerName="nova-metadata-log" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.579951 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac6df06-7bc0-4e8d-acb5-03205f191eef" containerName="nova-api-log" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.579974 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa84e94e-86c4-45f5-856d-55053bb099a8" containerName="nova-metadata-metadata" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.580002 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="49eabb59-89f9-4fbd-8b96-7a7464bdaf30" containerName="nova-manage" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.583442 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.585984 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.586801 4909 scope.go:117] "RemoveContainer" containerID="af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f" Nov 26 08:30:25 crc kubenswrapper[4909]: E1126 08:30:25.587420 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f\": container with ID starting with af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f not found: ID does not exist" containerID="af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.587526 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f"} err="failed to get container status \"af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f\": rpc error: code = NotFound desc = could not find container \"af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f\": container with ID starting with af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f not found: ID does not exist" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.587665 4909 scope.go:117] "RemoveContainer" containerID="8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a" Nov 26 08:30:25 crc kubenswrapper[4909]: E1126 08:30:25.588137 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a\": container with ID starting with 8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a not found: ID does not exist" containerID="8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.588227 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a"} err="failed to get container status \"8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a\": rpc error: code = NotFound desc = could not find container \"8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a\": container with ID starting with 8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a not found: ID does not exist" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 
08:30:25.588274 4909 scope.go:117] "RemoveContainer" containerID="af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.588667 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f"} err="failed to get container status \"af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f\": rpc error: code = NotFound desc = could not find container \"af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f\": container with ID starting with af6d3080dcc1d139c67f7e43a4d652a5ccee911dfa94a317ecfa5ea2bd48a55f not found: ID does not exist" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.588787 4909 scope.go:117] "RemoveContainer" containerID="8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.589423 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a"} err="failed to get container status \"8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a\": rpc error: code = NotFound desc = could not find container \"8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a\": container with ID starting with 8a82dc555c930585cec47d412cd657bea9245370a23b2d094b21ab1f9eb4a84a not found: ID does not exist" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.589458 4909 scope.go:117] "RemoveContainer" containerID="6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.613267 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.622796 4909 scope.go:117] "RemoveContainer" containerID="c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.628453 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.638749 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.641970 4909 scope.go:117] "RemoveContainer" containerID="6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a" Nov 26 08:30:25 crc kubenswrapper[4909]: E1126 08:30:25.642347 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a\": container with ID starting with 6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a not found: ID does not exist" containerID="6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.642379 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a"} err="failed to get container status \"6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a\": rpc error: code = NotFound desc = could not find container \"6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a\": container with ID starting with 6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a not found: ID does not exist" Nov 26 08:30:25 crc 
kubenswrapper[4909]: I1126 08:30:25.642400 4909 scope.go:117] "RemoveContainer" containerID="c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc" Nov 26 08:30:25 crc kubenswrapper[4909]: E1126 08:30:25.642735 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc\": container with ID starting with c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc not found: ID does not exist" containerID="c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.642788 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc"} err="failed to get container status \"c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc\": rpc error: code = NotFound desc = could not find container \"c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc\": container with ID starting with c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc not found: ID does not exist" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.642818 4909 scope.go:117] "RemoveContainer" containerID="6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.643088 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a"} err="failed to get container status \"6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a\": rpc error: code = NotFound desc = could not find container \"6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a\": container with ID starting with 6866fbc94469171d43aec7a42a34d8226b4c577f0975bedea337c106bd9e215a not found: ID does not exist" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.643108 4909 scope.go:117] "RemoveContainer" containerID="c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.643339 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc"} err="failed to get container status \"c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc\": rpc error: code = NotFound desc = could not find container \"c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc\": container with ID starting with c7fefc8f46601eca6c1bdbdad3e9570082a489b5672b80798f8578c6bd5d93cc not found: ID does not exist" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.646530 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8f54923-f109-4a81-ba59-104f1326f9b8-logs\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.646657 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: 
I1126 08:30:25.646723 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-config-data\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.646751 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8sr\" (UniqueName: \"kubernetes.io/projected/d8f54923-f109-4a81-ba59-104f1326f9b8-kube-api-access-kl8sr\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.649082 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.651022 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.653432 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.659281 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.748556 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8f54923-f109-4a81-ba59-104f1326f9b8-logs\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.748689 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-config-data\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.748719 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.748765 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p8tr\" (UniqueName: \"kubernetes.io/projected/c00e3ecc-24e2-4487-8bab-10f80ad77802-kube-api-access-4p8tr\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.748790 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.748989 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-config-data\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 
crc kubenswrapper[4909]: I1126 08:30:25.749037 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl8sr\" (UniqueName: \"kubernetes.io/projected/d8f54923-f109-4a81-ba59-104f1326f9b8-kube-api-access-kl8sr\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.749066 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8f54923-f109-4a81-ba59-104f1326f9b8-logs\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.749073 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c00e3ecc-24e2-4487-8bab-10f80ad77802-logs\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.753388 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-config-data\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.762312 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.768755 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl8sr\" (UniqueName: \"kubernetes.io/projected/d8f54923-f109-4a81-ba59-104f1326f9b8-kube-api-access-kl8sr\") pod \"nova-metadata-0\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") " pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.850441 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c00e3ecc-24e2-4487-8bab-10f80ad77802-logs\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.850669 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-config-data\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.850707 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.850753 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p8tr\" (UniqueName: \"kubernetes.io/projected/c00e3ecc-24e2-4487-8bab-10f80ad77802-kube-api-access-4p8tr\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc 
kubenswrapper[4909]: I1126 08:30:25.850969 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c00e3ecc-24e2-4487-8bab-10f80ad77802-logs\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.854166 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.854298 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-config-data\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.882304 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p8tr\" (UniqueName: \"kubernetes.io/projected/c00e3ecc-24e2-4487-8bab-10f80ad77802-kube-api-access-4p8tr\") pod \"nova-api-0\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") " pod="openstack/nova-api-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.915746 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 08:30:25 crc kubenswrapper[4909]: I1126 08:30:25.968689 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:30:26 crc kubenswrapper[4909]: I1126 08:30:26.446582 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:26 crc kubenswrapper[4909]: W1126 08:30:26.456259 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8f54923_f109_4a81_ba59_104f1326f9b8.slice/crio-77c73f2443009ecea6ca90037e2a947335f316326caa61ac701fcc82819c4b77 WatchSource:0}: Error finding container 77c73f2443009ecea6ca90037e2a947335f316326caa61ac701fcc82819c4b77: Status 404 returned error can't find the container with id 77c73f2443009ecea6ca90037e2a947335f316326caa61ac701fcc82819c4b77 Nov 26 08:30:26 crc kubenswrapper[4909]: I1126 08:30:26.485558 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8f54923-f109-4a81-ba59-104f1326f9b8","Type":"ContainerStarted","Data":"77c73f2443009ecea6ca90037e2a947335f316326caa61ac701fcc82819c4b77"} Nov 26 08:30:26 crc kubenswrapper[4909]: I1126 08:30:26.494645 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:26 crc kubenswrapper[4909]: I1126 08:30:26.508797 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac6df06-7bc0-4e8d-acb5-03205f191eef" path="/var/lib/kubelet/pods/1ac6df06-7bc0-4e8d-acb5-03205f191eef/volumes" Nov 26 08:30:26 crc kubenswrapper[4909]: I1126 08:30:26.509538 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa84e94e-86c4-45f5-856d-55053bb099a8" path="/var/lib/kubelet/pods/aa84e94e-86c4-45f5-856d-55053bb099a8/volumes" Nov 26 08:30:26 crc kubenswrapper[4909]: I1126 08:30:26.829148 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:26 crc kubenswrapper[4909]: I1126 
08:30:26.840799 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.152540 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.224989 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85c649d7bf-5njdx"] Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.225245 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" podUID="fd94cb09-3acf-4e20-9d66-060b1a2b17d4" containerName="dnsmasq-dns" containerID="cri-o://4274b0b75b1b87cf9aa5e6c486577bba9331ccfd3c40f7ecc98529a60ef0656c" gracePeriod=10 Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.502941 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c00e3ecc-24e2-4487-8bab-10f80ad77802","Type":"ContainerStarted","Data":"f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab"} Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.503018 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c00e3ecc-24e2-4487-8bab-10f80ad77802","Type":"ContainerStarted","Data":"31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599"} Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.503033 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c00e3ecc-24e2-4487-8bab-10f80ad77802","Type":"ContainerStarted","Data":"c4901358e6b47261c0c1e0ab38242b5576b5120c83b197398c29bafae76f1181"} Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.505483 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8f54923-f109-4a81-ba59-104f1326f9b8","Type":"ContainerStarted","Data":"192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4"} Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.505518 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8f54923-f109-4a81-ba59-104f1326f9b8","Type":"ContainerStarted","Data":"dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c"} Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.508997 4909 generic.go:334] "Generic (PLEG): container finished" podID="fd94cb09-3acf-4e20-9d66-060b1a2b17d4" containerID="4274b0b75b1b87cf9aa5e6c486577bba9331ccfd3c40f7ecc98529a60ef0656c" exitCode=0 Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.510278 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" event={"ID":"fd94cb09-3acf-4e20-9d66-060b1a2b17d4","Type":"ContainerDied","Data":"4274b0b75b1b87cf9aa5e6c486577bba9331ccfd3c40f7ecc98529a60ef0656c"} Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.517677 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.526961 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.52693917 podStartE2EDuration="2.52693917s" podCreationTimestamp="2025-11-26 08:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:27.521188743 +0000 UTC m=+5399.667399919" 
watchObservedRunningTime="2025-11-26 08:30:27.52693917 +0000 UTC m=+5399.673150336" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.568143 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5681252150000002 podStartE2EDuration="2.568125215s" podCreationTimestamp="2025-11-26 08:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:27.555300515 +0000 UTC m=+5399.701511681" watchObservedRunningTime="2025-11-26 08:30:27.568125215 +0000 UTC m=+5399.714336391" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.695574 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.790228 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-dns-svc\") pod \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.790570 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhs2b\" (UniqueName: \"kubernetes.io/projected/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-kube-api-access-dhs2b\") pod \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.790601 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-config\") pod \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.790655 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-nb\") pod \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.790724 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-sb\") pod \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\" (UID: \"fd94cb09-3acf-4e20-9d66-060b1a2b17d4\") " Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.795995 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-kube-api-access-dhs2b" (OuterVolumeSpecName: "kube-api-access-dhs2b") pod "fd94cb09-3acf-4e20-9d66-060b1a2b17d4" (UID: "fd94cb09-3acf-4e20-9d66-060b1a2b17d4"). InnerVolumeSpecName "kube-api-access-dhs2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.837447 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd94cb09-3acf-4e20-9d66-060b1a2b17d4" (UID: "fd94cb09-3acf-4e20-9d66-060b1a2b17d4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.838692 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-config" (OuterVolumeSpecName: "config") pod "fd94cb09-3acf-4e20-9d66-060b1a2b17d4" (UID: "fd94cb09-3acf-4e20-9d66-060b1a2b17d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.840262 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fd94cb09-3acf-4e20-9d66-060b1a2b17d4" (UID: "fd94cb09-3acf-4e20-9d66-060b1a2b17d4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.853918 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fd94cb09-3acf-4e20-9d66-060b1a2b17d4" (UID: "fd94cb09-3acf-4e20-9d66-060b1a2b17d4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.892915 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.892948 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.892961 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhs2b\" (UniqueName: \"kubernetes.io/projected/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-kube-api-access-dhs2b\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.892978 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:27 crc kubenswrapper[4909]: I1126 08:30:27.892990 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd94cb09-3acf-4e20-9d66-060b1a2b17d4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:28 crc kubenswrapper[4909]: I1126 08:30:28.531770 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" event={"ID":"fd94cb09-3acf-4e20-9d66-060b1a2b17d4","Type":"ContainerDied","Data":"0826497c84c2c5e877b1870593ba0ed29adf3abd761e4ebd1365b443d9857465"} Nov 26 08:30:28 crc kubenswrapper[4909]: I1126 08:30:28.531879 4909 scope.go:117] "RemoveContainer" containerID="4274b0b75b1b87cf9aa5e6c486577bba9331ccfd3c40f7ecc98529a60ef0656c" Nov 26 08:30:28 crc kubenswrapper[4909]: I1126 08:30:28.531901 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" Nov 26 08:30:28 crc kubenswrapper[4909]: I1126 08:30:28.567252 4909 scope.go:117] "RemoveContainer" containerID="f690d461f3f183835aac50b2a950d27a748c6b262124aa9cd74f786162de8468" Nov 26 08:30:28 crc kubenswrapper[4909]: I1126 08:30:28.589631 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85c649d7bf-5njdx"] Nov 26 08:30:28 crc kubenswrapper[4909]: I1126 08:30:28.601912 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85c649d7bf-5njdx"] Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.511202 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.532306 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-combined-ca-bundle\") pod \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.532399 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nlld\" (UniqueName: \"kubernetes.io/projected/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-kube-api-access-8nlld\") pod \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.533026 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-config-data\") pod \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\" (UID: \"7c0447bc-e6bf-436b-b47a-ebbec10e36e5\") " Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.539246 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-kube-api-access-8nlld" (OuterVolumeSpecName: "kube-api-access-8nlld") pod "7c0447bc-e6bf-436b-b47a-ebbec10e36e5" (UID: "7c0447bc-e6bf-436b-b47a-ebbec10e36e5"). InnerVolumeSpecName "kube-api-access-8nlld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.556688 4909 generic.go:334] "Generic (PLEG): container finished" podID="7c0447bc-e6bf-436b-b47a-ebbec10e36e5" containerID="3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5" exitCode=0 Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.556739 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c0447bc-e6bf-436b-b47a-ebbec10e36e5","Type":"ContainerDied","Data":"3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5"} Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.556769 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c0447bc-e6bf-436b-b47a-ebbec10e36e5","Type":"ContainerDied","Data":"ffa398c407d9466866efe4725ac3b7136e74ba288636dc64bbf306804c3c6bb1"} Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.556790 4909 scope.go:117] "RemoveContainer" containerID="3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.556878 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.566729 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-config-data" (OuterVolumeSpecName: "config-data") pod "7c0447bc-e6bf-436b-b47a-ebbec10e36e5" (UID: "7c0447bc-e6bf-436b-b47a-ebbec10e36e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.585925 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c0447bc-e6bf-436b-b47a-ebbec10e36e5" (UID: "7c0447bc-e6bf-436b-b47a-ebbec10e36e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.628715 4909 scope.go:117] "RemoveContainer" containerID="3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5" Nov 26 08:30:29 crc kubenswrapper[4909]: E1126 08:30:29.629212 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5\": container with ID starting with 3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5 not found: ID does not exist" containerID="3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.629257 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5"} err="failed to get container status \"3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5\": rpc error: code = NotFound desc = could not find container \"3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5\": container with ID starting with 3ccd4cead975a6a3fc9dffe20dc91749f88187c6e12d2ad57a45d221894dc5e5 not found: ID does not exist" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.635510 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.635573 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nlld\" (UniqueName: \"kubernetes.io/projected/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-kube-api-access-8nlld\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.635646 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0447bc-e6bf-436b-b47a-ebbec10e36e5-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.914661 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.936432 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.950141 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:30:29 crc kubenswrapper[4909]: E1126 08:30:29.950687 4909 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fd94cb09-3acf-4e20-9d66-060b1a2b17d4" containerName="init" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.950710 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd94cb09-3acf-4e20-9d66-060b1a2b17d4" containerName="init" Nov 26 08:30:29 crc kubenswrapper[4909]: E1126 08:30:29.950744 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd94cb09-3acf-4e20-9d66-060b1a2b17d4" containerName="dnsmasq-dns" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.950752 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd94cb09-3acf-4e20-9d66-060b1a2b17d4" containerName="dnsmasq-dns" Nov 26 08:30:29 crc kubenswrapper[4909]: E1126 08:30:29.950786 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0447bc-e6bf-436b-b47a-ebbec10e36e5" containerName="nova-scheduler-scheduler" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.950794 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0447bc-e6bf-436b-b47a-ebbec10e36e5" containerName="nova-scheduler-scheduler" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.951031 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0447bc-e6bf-436b-b47a-ebbec10e36e5" containerName="nova-scheduler-scheduler" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.951054 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd94cb09-3acf-4e20-9d66-060b1a2b17d4" containerName="dnsmasq-dns" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.951871 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.954611 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 26 08:30:29 crc kubenswrapper[4909]: I1126 08:30:29.961820 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.045351 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.045440 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsz8t\" (UniqueName: \"kubernetes.io/projected/fdbfd5df-9c9d-4edc-a840-92817744b4d1-kube-api-access-wsz8t\") pod \"nova-scheduler-0\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.045536 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-config-data\") pod \"nova-scheduler-0\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.147164 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-config-data\") pod \"nova-scheduler-0\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.147267 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.147311 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsz8t\" (UniqueName: \"kubernetes.io/projected/fdbfd5df-9c9d-4edc-a840-92817744b4d1-kube-api-access-wsz8t\") pod \"nova-scheduler-0\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.152202 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.157338 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-config-data\") pod \"nova-scheduler-0\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.170210 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsz8t\" (UniqueName: \"kubernetes.io/projected/fdbfd5df-9c9d-4edc-a840-92817744b4d1-kube-api-access-wsz8t\") pod \"nova-scheduler-0\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") " pod="openstack/nova-scheduler-0" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.271357 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.514761 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0447bc-e6bf-436b-b47a-ebbec10e36e5" path="/var/lib/kubelet/pods/7c0447bc-e6bf-436b-b47a-ebbec10e36e5/volumes" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.516004 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd94cb09-3acf-4e20-9d66-060b1a2b17d4" path="/var/lib/kubelet/pods/fd94cb09-3acf-4e20-9d66-060b1a2b17d4/volumes" Nov 26 08:30:30 crc kubenswrapper[4909]: W1126 08:30:30.808807 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdbfd5df_9c9d_4edc_a840_92817744b4d1.slice/crio-fa00a0615fd0c01e525691fd8c010e0dba72b7b2c31938ab56421224a291b472 WatchSource:0}: Error finding container fa00a0615fd0c01e525691fd8c010e0dba72b7b2c31938ab56421224a291b472: Status 404 returned error can't find the container with id fa00a0615fd0c01e525691fd8c010e0dba72b7b2c31938ab56421224a291b472 Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.811691 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.916302 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 08:30:30 crc kubenswrapper[4909]: I1126 08:30:30.916814 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 08:30:31 crc kubenswrapper[4909]: I1126 08:30:31.596477 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdbfd5df-9c9d-4edc-a840-92817744b4d1","Type":"ContainerStarted","Data":"9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e"} Nov 26 08:30:31 crc kubenswrapper[4909]: I1126 08:30:31.596906 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdbfd5df-9c9d-4edc-a840-92817744b4d1","Type":"ContainerStarted","Data":"fa00a0615fd0c01e525691fd8c010e0dba72b7b2c31938ab56421224a291b472"} Nov 26 08:30:31 crc kubenswrapper[4909]: I1126 08:30:31.630886 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.630856886 podStartE2EDuration="2.630856886s" podCreationTimestamp="2025-11-26 08:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:31.618269293 +0000 UTC m=+5403.764480479" watchObservedRunningTime="2025-11-26 08:30:31.630856886 +0000 UTC m=+5403.777068062" Nov 26 08:30:32 crc kubenswrapper[4909]: I1126 08:30:32.499635 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85c649d7bf-5njdx" podUID="fd94cb09-3acf-4e20-9d66-060b1a2b17d4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.62:5353: i/o timeout" Nov 26 08:30:33 crc kubenswrapper[4909]: I1126 08:30:33.891542 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.410112 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xh4vs"] Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.412140 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.415169 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.417106 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.431362 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xh4vs"] Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.449110 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-config-data\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.449573 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-scripts\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.449667 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.449779 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pplhf\" (UniqueName: \"kubernetes.io/projected/27d633f2-8335-4ee5-be77-603a03d89a91-kube-api-access-pplhf\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.553295 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pplhf\" (UniqueName: \"kubernetes.io/projected/27d633f2-8335-4ee5-be77-603a03d89a91-kube-api-access-pplhf\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.553487 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-config-data\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.553617 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-scripts\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.553674 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.561442 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.568248 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-scripts\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.597209 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-config-data\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.597930 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pplhf\" (UniqueName: \"kubernetes.io/projected/27d633f2-8335-4ee5-be77-603a03d89a91-kube-api-access-pplhf\") pod \"nova-cell1-cell-mapping-xh4vs\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") " pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:34 crc kubenswrapper[4909]: I1126 08:30:34.756450 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xh4vs" Nov 26 08:30:35 crc kubenswrapper[4909]: I1126 08:30:35.252515 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xh4vs"] Nov 26 08:30:35 crc kubenswrapper[4909]: W1126 08:30:35.260471 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27d633f2_8335_4ee5_be77_603a03d89a91.slice/crio-b54e170122af47d01cfcac90af42a6efe445702e2a90306029e2b7ac695640c0 WatchSource:0}: Error finding container b54e170122af47d01cfcac90af42a6efe445702e2a90306029e2b7ac695640c0: Status 404 returned error can't find the container with id b54e170122af47d01cfcac90af42a6efe445702e2a90306029e2b7ac695640c0 Nov 26 08:30:35 crc kubenswrapper[4909]: I1126 08:30:35.272222 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 26 08:30:35 crc kubenswrapper[4909]: I1126 08:30:35.641901 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xh4vs" event={"ID":"27d633f2-8335-4ee5-be77-603a03d89a91","Type":"ContainerStarted","Data":"8c8190a1de854c52412c8e8a0336569605b98e73d49817e531e53eb1f6dd5af4"} Nov 26 08:30:35 crc kubenswrapper[4909]: I1126 08:30:35.642430 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xh4vs" event={"ID":"27d633f2-8335-4ee5-be77-603a03d89a91","Type":"ContainerStarted","Data":"b54e170122af47d01cfcac90af42a6efe445702e2a90306029e2b7ac695640c0"} Nov 26 08:30:35 crc kubenswrapper[4909]: I1126 08:30:35.667281 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xh4vs" podStartSLOduration=1.6672606490000002 podStartE2EDuration="1.667260649s" podCreationTimestamp="2025-11-26 08:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:35.661775309 +0000 UTC m=+5407.807986475" watchObservedRunningTime="2025-11-26 08:30:35.667260649 +0000 UTC m=+5407.813471825" Nov 26 08:30:35 crc kubenswrapper[4909]: I1126 08:30:35.918454 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 08:30:35 crc kubenswrapper[4909]: I1126 08:30:35.919791 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 08:30:35 crc kubenswrapper[4909]: I1126 08:30:35.970035 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 08:30:35 crc kubenswrapper[4909]: I1126 08:30:35.970149 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 08:30:36 crc kubenswrapper[4909]: I1126 08:30:36.998891 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.82:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 08:30:37 crc kubenswrapper[4909]: I1126 08:30:36.998875 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.82:8775/\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Nov 26 08:30:37 crc kubenswrapper[4909]: I1126 08:30:37.039920 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.83:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 08:30:37 crc kubenswrapper[4909]: I1126 08:30:37.039951 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.83:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 08:30:37 crc kubenswrapper[4909]: I1126 08:30:37.300633 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:30:37 crc kubenswrapper[4909]: I1126 08:30:37.300694 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:30:38 crc kubenswrapper[4909]: I1126 08:30:38.302535 4909 scope.go:117] "RemoveContainer" containerID="3a8d6e779cb8f4e23370b18e7fb2a9c37bd5cdd17703f5439e9540db108a6783" Nov 26 08:30:40 crc kubenswrapper[4909]: I1126 08:30:40.272202 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 26 08:30:40 crc kubenswrapper[4909]: I1126 08:30:40.303256 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 26 08:30:40 crc kubenswrapper[4909]: I1126 08:30:40.700908 4909 generic.go:334] "Generic (PLEG): container finished" podID="27d633f2-8335-4ee5-be77-603a03d89a91" containerID="8c8190a1de854c52412c8e8a0336569605b98e73d49817e531e53eb1f6dd5af4" exitCode=0 Nov 26 08:30:40 crc kubenswrapper[4909]: I1126 08:30:40.701009 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xh4vs" event={"ID":"27d633f2-8335-4ee5-be77-603a03d89a91","Type":"ContainerDied","Data":"8c8190a1de854c52412c8e8a0336569605b98e73d49817e531e53eb1f6dd5af4"} Nov 26 08:30:40 crc kubenswrapper[4909]: I1126 08:30:40.735793 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.066818 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.066818 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xh4vs"
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.111576 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-config-data\") pod \"27d633f2-8335-4ee5-be77-603a03d89a91\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") "
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.111694 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-combined-ca-bundle\") pod \"27d633f2-8335-4ee5-be77-603a03d89a91\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") "
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.111827 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pplhf\" (UniqueName: \"kubernetes.io/projected/27d633f2-8335-4ee5-be77-603a03d89a91-kube-api-access-pplhf\") pod \"27d633f2-8335-4ee5-be77-603a03d89a91\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") "
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.112020 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-scripts\") pod \"27d633f2-8335-4ee5-be77-603a03d89a91\" (UID: \"27d633f2-8335-4ee5-be77-603a03d89a91\") "
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.117333 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-scripts" (OuterVolumeSpecName: "scripts") pod "27d633f2-8335-4ee5-be77-603a03d89a91" (UID: "27d633f2-8335-4ee5-be77-603a03d89a91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.117795 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d633f2-8335-4ee5-be77-603a03d89a91-kube-api-access-pplhf" (OuterVolumeSpecName: "kube-api-access-pplhf") pod "27d633f2-8335-4ee5-be77-603a03d89a91" (UID: "27d633f2-8335-4ee5-be77-603a03d89a91"). InnerVolumeSpecName "kube-api-access-pplhf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.141434 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-config-data" (OuterVolumeSpecName: "config-data") pod "27d633f2-8335-4ee5-be77-603a03d89a91" (UID: "27d633f2-8335-4ee5-be77-603a03d89a91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.143956 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27d633f2-8335-4ee5-be77-603a03d89a91" (UID: "27d633f2-8335-4ee5-be77-603a03d89a91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.214162 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pplhf\" (UniqueName: \"kubernetes.io/projected/27d633f2-8335-4ee5-be77-603a03d89a91-kube-api-access-pplhf\") on node \"crc\" DevicePath \"\""
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.214200 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.214210 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.214221 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d633f2-8335-4ee5-be77-603a03d89a91-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.723949 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xh4vs" event={"ID":"27d633f2-8335-4ee5-be77-603a03d89a91","Type":"ContainerDied","Data":"b54e170122af47d01cfcac90af42a6efe445702e2a90306029e2b7ac695640c0"}
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.724016 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b54e170122af47d01cfcac90af42a6efe445702e2a90306029e2b7ac695640c0"
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.724070 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xh4vs"
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.927392 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.927744 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerName="nova-api-log" containerID="cri-o://31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599" gracePeriod=30
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.927848 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerName="nova-api-api" containerID="cri-o://f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab" gracePeriod=30
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.946833 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.947061 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fdbfd5df-9c9d-4edc-a840-92817744b4d1" containerName="nova-scheduler-scheduler" containerID="cri-o://9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e" gracePeriod=30
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.992457 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.992748 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerName="nova-metadata-log" containerID="cri-o://dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c" gracePeriod=30
Nov 26 08:30:42 crc kubenswrapper[4909]: I1126 08:30:42.992845 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerName="nova-metadata-metadata" containerID="cri-o://192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4" gracePeriod=30
Nov 26 08:30:43 crc kubenswrapper[4909]: I1126 08:30:43.741250 4909 generic.go:334] "Generic (PLEG): container finished" podID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerID="31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599" exitCode=143
Nov 26 08:30:43 crc kubenswrapper[4909]: I1126 08:30:43.741293 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c00e3ecc-24e2-4487-8bab-10f80ad77802","Type":"ContainerDied","Data":"31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599"}
Nov 26 08:30:43 crc kubenswrapper[4909]: I1126 08:30:43.744618 4909 generic.go:334] "Generic (PLEG): container finished" podID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerID="dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c" exitCode=143
Nov 26 08:30:43 crc kubenswrapper[4909]: I1126 08:30:43.744682 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8f54923-f109-4a81-ba59-104f1326f9b8","Type":"ContainerDied","Data":"dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c"}
Nov 26 08:30:45 crc kubenswrapper[4909]: E1126 08:30:45.275548 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 26 08:30:45 crc kubenswrapper[4909]: E1126 08:30:45.278182 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 26 08:30:45 crc kubenswrapper[4909]: E1126 08:30:45.281834 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 26 08:30:45 crc kubenswrapper[4909]: E1126 08:30:45.281898 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fdbfd5df-9c9d-4edc-a840-92817744b4d1" containerName="nova-scheduler-scheduler"
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.607885 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.613041 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.780741 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.780927 4909 generic.go:334] "Generic (PLEG): container finished" podID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerID="192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4" exitCode=0
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.781045 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8f54923-f109-4a81-ba59-104f1326f9b8","Type":"ContainerDied","Data":"192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4"}
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.781101 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8f54923-f109-4a81-ba59-104f1326f9b8","Type":"ContainerDied","Data":"77c73f2443009ecea6ca90037e2a947335f316326caa61ac701fcc82819c4b77"}
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.781121 4909 scope.go:117] "RemoveContainer" containerID="192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4"
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.784399 4909 generic.go:334] "Generic (PLEG): container finished" podID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerID="f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab" exitCode=0
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.784434 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.784454 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c00e3ecc-24e2-4487-8bab-10f80ad77802","Type":"ContainerDied","Data":"f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab"}
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.784568 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c00e3ecc-24e2-4487-8bab-10f80ad77802","Type":"ContainerDied","Data":"c4901358e6b47261c0c1e0ab38242b5576b5120c83b197398c29bafae76f1181"}
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.800117 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-config-data\") pod \"c00e3ecc-24e2-4487-8bab-10f80ad77802\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") "
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.800237 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c00e3ecc-24e2-4487-8bab-10f80ad77802-logs\") pod \"c00e3ecc-24e2-4487-8bab-10f80ad77802\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") "
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.800261 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-combined-ca-bundle\") pod \"d8f54923-f109-4a81-ba59-104f1326f9b8\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") "
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.800328 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl8sr\" (UniqueName: \"kubernetes.io/projected/d8f54923-f109-4a81-ba59-104f1326f9b8-kube-api-access-kl8sr\") pod \"d8f54923-f109-4a81-ba59-104f1326f9b8\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") "
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.800359 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8f54923-f109-4a81-ba59-104f1326f9b8-logs\") pod \"d8f54923-f109-4a81-ba59-104f1326f9b8\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") "
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.800396 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-config-data\") pod \"d8f54923-f109-4a81-ba59-104f1326f9b8\" (UID: \"d8f54923-f109-4a81-ba59-104f1326f9b8\") "
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.800514 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p8tr\" (UniqueName: \"kubernetes.io/projected/c00e3ecc-24e2-4487-8bab-10f80ad77802-kube-api-access-4p8tr\") pod \"c00e3ecc-24e2-4487-8bab-10f80ad77802\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") "
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.800536 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-combined-ca-bundle\") pod \"c00e3ecc-24e2-4487-8bab-10f80ad77802\" (UID: \"c00e3ecc-24e2-4487-8bab-10f80ad77802\") "
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.801372 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00e3ecc-24e2-4487-8bab-10f80ad77802-logs" (OuterVolumeSpecName: "logs") pod "c00e3ecc-24e2-4487-8bab-10f80ad77802" (UID: "c00e3ecc-24e2-4487-8bab-10f80ad77802"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.802023 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f54923-f109-4a81-ba59-104f1326f9b8-logs" (OuterVolumeSpecName: "logs") pod "d8f54923-f109-4a81-ba59-104f1326f9b8" (UID: "d8f54923-f109-4a81-ba59-104f1326f9b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.807287 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f54923-f109-4a81-ba59-104f1326f9b8-kube-api-access-kl8sr" (OuterVolumeSpecName: "kube-api-access-kl8sr") pod "d8f54923-f109-4a81-ba59-104f1326f9b8" (UID: "d8f54923-f109-4a81-ba59-104f1326f9b8"). InnerVolumeSpecName "kube-api-access-kl8sr". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.811630 4909 scope.go:117] "RemoveContainer" containerID="dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.835522 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8f54923-f109-4a81-ba59-104f1326f9b8" (UID: "d8f54923-f109-4a81-ba59-104f1326f9b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.842285 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-config-data" (OuterVolumeSpecName: "config-data") pod "d8f54923-f109-4a81-ba59-104f1326f9b8" (UID: "d8f54923-f109-4a81-ba59-104f1326f9b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.844136 4909 scope.go:117] "RemoveContainer" containerID="192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4" Nov 26 08:30:46 crc kubenswrapper[4909]: E1126 08:30:46.844947 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4\": container with ID starting with 192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4 not found: ID does not exist" containerID="192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.844983 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4"} err="failed to get container status \"192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4\": rpc error: code = NotFound desc = could not find container \"192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4\": container with ID starting with 192394fc73d52584b273bfb4bff71017e9bab9d01dc497780369f7e8b7ac29c4 not found: ID does not exist" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.845009 4909 scope.go:117] "RemoveContainer" containerID="dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c" Nov 26 08:30:46 crc kubenswrapper[4909]: E1126 08:30:46.845685 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c\": container with ID starting with dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c not found: ID does not exist" containerID="dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.845742 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c"} err="failed to get container status \"dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c\": rpc error: code = NotFound desc = could not find container \"dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c\": container with ID starting with dd574716ac5b8c0bb54a9e2b59d7c76ca0a9945120985de938e6f9878595d73c 
not found: ID does not exist" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.845774 4909 scope.go:117] "RemoveContainer" containerID="f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.846715 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c00e3ecc-24e2-4487-8bab-10f80ad77802" (UID: "c00e3ecc-24e2-4487-8bab-10f80ad77802"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.849125 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-config-data" (OuterVolumeSpecName: "config-data") pod "c00e3ecc-24e2-4487-8bab-10f80ad77802" (UID: "c00e3ecc-24e2-4487-8bab-10f80ad77802"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.867236 4909 scope.go:117] "RemoveContainer" containerID="31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.887034 4909 scope.go:117] "RemoveContainer" containerID="f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab" Nov 26 08:30:46 crc kubenswrapper[4909]: E1126 08:30:46.887559 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab\": container with ID starting with f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab not found: ID does not exist" containerID="f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.887633 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab"} err="failed to get container status \"f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab\": rpc error: code = NotFound desc = could not find container \"f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab\": container with ID starting with f3b214091941368f5094df80eba12884d6aad487df44e235d1bea1e7d327f4ab not found: ID does not exist" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.887658 4909 scope.go:117] "RemoveContainer" containerID="31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599" Nov 26 08:30:46 crc kubenswrapper[4909]: E1126 08:30:46.887955 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599\": container with ID starting with 31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599 not found: ID does not exist" containerID="31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.888006 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599"} err="failed to get container status \"31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599\": rpc error: code = NotFound desc = could not find container 
\"31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599\": container with ID starting with 31a270828379cdfbc6c419fe9647f07a177941ae0a35c1c6dbfa0a5e0efec599 not found: ID does not exist" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.902680 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p8tr\" (UniqueName: \"kubernetes.io/projected/c00e3ecc-24e2-4487-8bab-10f80ad77802-kube-api-access-4p8tr\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.902718 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.902732 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c00e3ecc-24e2-4487-8bab-10f80ad77802-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.902744 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c00e3ecc-24e2-4487-8bab-10f80ad77802-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.902756 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.902768 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl8sr\" (UniqueName: \"kubernetes.io/projected/d8f54923-f109-4a81-ba59-104f1326f9b8-kube-api-access-kl8sr\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.902779 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8f54923-f109-4a81-ba59-104f1326f9b8-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:46 crc kubenswrapper[4909]: I1126 08:30:46.902791 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f54923-f109-4a81-ba59-104f1326f9b8-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.155123 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.166273 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.193169 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.223244 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.232827 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:47 crc kubenswrapper[4909]: E1126 08:30:47.233536 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d633f2-8335-4ee5-be77-603a03d89a91" containerName="nova-manage" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.233576 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d633f2-8335-4ee5-be77-603a03d89a91" containerName="nova-manage" Nov 26 08:30:47 crc kubenswrapper[4909]: E1126 08:30:47.233631 4909 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerName="nova-metadata-log" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.233643 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerName="nova-metadata-log" Nov 26 08:30:47 crc kubenswrapper[4909]: E1126 08:30:47.233668 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerName="nova-api-api" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.233677 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerName="nova-api-api" Nov 26 08:30:47 crc kubenswrapper[4909]: E1126 08:30:47.233707 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerName="nova-metadata-metadata" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.233719 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerName="nova-metadata-metadata" Nov 26 08:30:47 crc kubenswrapper[4909]: E1126 08:30:47.233744 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerName="nova-api-log" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.233754 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerName="nova-api-log" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.233959 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerName="nova-metadata-log" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.233977 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerName="nova-api-api" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.233996 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d633f2-8335-4ee5-be77-603a03d89a91" containerName="nova-manage" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.234011 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c00e3ecc-24e2-4487-8bab-10f80ad77802" containerName="nova-api-log" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.234022 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f54923-f109-4a81-ba59-104f1326f9b8" containerName="nova-metadata-metadata" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.235401 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.238216 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.244114 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.247884 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.252521 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.255382 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.282802 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.310821 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8vgq\" (UniqueName: \"kubernetes.io/projected/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-kube-api-access-k8vgq\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.310884 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfgkp\" (UniqueName: \"kubernetes.io/projected/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-kube-api-access-vfgkp\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.311025 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-logs\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.311091 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.311122 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-logs\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.311165 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.311464 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-config-data\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.311509 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-config-data\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc 
kubenswrapper[4909]: I1126 08:30:47.412619 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8vgq\" (UniqueName: \"kubernetes.io/projected/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-kube-api-access-k8vgq\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.412656 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfgkp\" (UniqueName: \"kubernetes.io/projected/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-kube-api-access-vfgkp\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.412715 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-logs\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.412734 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.412770 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-logs\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.412794 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.412875 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-config-data\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.412892 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-config-data\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.413468 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-logs\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.413637 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-logs\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.417125 4909 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-config-data\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.418660 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.419195 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-config-data\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.420148 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.429118 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8vgq\" (UniqueName: \"kubernetes.io/projected/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-kube-api-access-k8vgq\") pod \"nova-api-0\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.440745 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfgkp\" (UniqueName: \"kubernetes.io/projected/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-kube-api-access-vfgkp\") pod \"nova-metadata-0\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " pod="openstack/nova-metadata-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.524289 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.560005 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.575911 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.575911 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.628446 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-combined-ca-bundle\") pod \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") "
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.628642 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-config-data\") pod \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") "
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.628700 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsz8t\" (UniqueName: \"kubernetes.io/projected/fdbfd5df-9c9d-4edc-a840-92817744b4d1-kube-api-access-wsz8t\") pod \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\" (UID: \"fdbfd5df-9c9d-4edc-a840-92817744b4d1\") "
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.636440 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdbfd5df-9c9d-4edc-a840-92817744b4d1-kube-api-access-wsz8t" (OuterVolumeSpecName: "kube-api-access-wsz8t") pod "fdbfd5df-9c9d-4edc-a840-92817744b4d1" (UID: "fdbfd5df-9c9d-4edc-a840-92817744b4d1"). InnerVolumeSpecName "kube-api-access-wsz8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.660532 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdbfd5df-9c9d-4edc-a840-92817744b4d1" (UID: "fdbfd5df-9c9d-4edc-a840-92817744b4d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.674174 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-config-data" (OuterVolumeSpecName: "config-data") pod "fdbfd5df-9c9d-4edc-a840-92817744b4d1" (UID: "fdbfd5df-9c9d-4edc-a840-92817744b4d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.731011 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.731038 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbfd5df-9c9d-4edc-a840-92817744b4d1-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.731049 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsz8t\" (UniqueName: \"kubernetes.io/projected/fdbfd5df-9c9d-4edc-a840-92817744b4d1-kube-api-access-wsz8t\") on node \"crc\" DevicePath \"\""
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.807411 4909 generic.go:334] "Generic (PLEG): container finished" podID="fdbfd5df-9c9d-4edc-a840-92817744b4d1" containerID="9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e" exitCode=0
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.807527 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdbfd5df-9c9d-4edc-a840-92817744b4d1","Type":"ContainerDied","Data":"9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e"}
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.807619 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdbfd5df-9c9d-4edc-a840-92817744b4d1","Type":"ContainerDied","Data":"fa00a0615fd0c01e525691fd8c010e0dba72b7b2c31938ab56421224a291b472"}
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.807634 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.807650 4909 scope.go:117] "RemoveContainer" containerID="9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e"
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.859428 4909 scope.go:117] "RemoveContainer" containerID="9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e"
Nov 26 08:30:47 crc kubenswrapper[4909]: E1126 08:30:47.859905 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e\": container with ID starting with 9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e not found: ID does not exist" containerID="9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e"
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.859936 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e"} err="failed to get container status \"9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e\": rpc error: code = NotFound desc = could not find container \"9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e\": container with ID starting with 9ee746241fd9c14496c371a70a36f10038858d5b8c1b369343e3eab5445ab13e not found: ID does not exist"
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.864350 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.879024 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.900451 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 26 08:30:47 crc kubenswrapper[4909]: E1126 08:30:47.901123 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdbfd5df-9c9d-4edc-a840-92817744b4d1" containerName="nova-scheduler-scheduler"
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.901142 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdbfd5df-9c9d-4edc-a840-92817744b4d1" containerName="nova-scheduler-scheduler"
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.901338 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdbfd5df-9c9d-4edc-a840-92817744b4d1" containerName="nova-scheduler-scheduler"
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.902088 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.904559 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.919244 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 26 08:30:47 crc kubenswrapper[4909]: I1126 08:30:47.933483 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.036457 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkw87\" (UniqueName: \"kubernetes.io/projected/0765164d-a12f-4917-a71c-c909a27d4ba6-kube-api-access-hkw87\") pod \"nova-scheduler-0\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " pod="openstack/nova-scheduler-0"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.036585 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-config-data\") pod \"nova-scheduler-0\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " pod="openstack/nova-scheduler-0"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.036654 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " pod="openstack/nova-scheduler-0"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.138834 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkw87\" (UniqueName: \"kubernetes.io/projected/0765164d-a12f-4917-a71c-c909a27d4ba6-kube-api-access-hkw87\") pod \"nova-scheduler-0\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " pod="openstack/nova-scheduler-0"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.139166 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-config-data\") pod \"nova-scheduler-0\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " pod="openstack/nova-scheduler-0"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.139198 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " pod="openstack/nova-scheduler-0"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.145833 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-config-data\") pod \"nova-scheduler-0\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " pod="openstack/nova-scheduler-0"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.149806 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " pod="openstack/nova-scheduler-0"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.156034 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkw87\" (UniqueName: \"kubernetes.io/projected/0765164d-a12f-4917-a71c-c909a27d4ba6-kube-api-access-hkw87\") pod \"nova-scheduler-0\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " pod="openstack/nova-scheduler-0"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.166114 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 26 08:30:48 crc kubenswrapper[4909]: W1126 08:30:48.173863 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode48ec2ae_d1ce_4d56_90f5_2b46b76d8d32.slice/crio-e46a09fa770cb60bfd2d7fb11d801d1ac575ed645548b7c8e54b296a20ef2d53 WatchSource:0}: Error finding container e46a09fa770cb60bfd2d7fb11d801d1ac575ed645548b7c8e54b296a20ef2d53: Status 404 returned error can't find the container with id e46a09fa770cb60bfd2d7fb11d801d1ac575ed645548b7c8e54b296a20ef2d53
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.226317 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.511895 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00e3ecc-24e2-4487-8bab-10f80ad77802" path="/var/lib/kubelet/pods/c00e3ecc-24e2-4487-8bab-10f80ad77802/volumes"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.515825 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f54923-f109-4a81-ba59-104f1326f9b8" path="/var/lib/kubelet/pods/d8f54923-f109-4a81-ba59-104f1326f9b8/volumes"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.516603 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdbfd5df-9c9d-4edc-a840-92817744b4d1" path="/var/lib/kubelet/pods/fdbfd5df-9c9d-4edc-a840-92817744b4d1/volumes"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.668734 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 26 08:30:48 crc kubenswrapper[4909]: W1126 08:30:48.671330 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0765164d_a12f_4917_a71c_c909a27d4ba6.slice/crio-108b766365f0b11c8f1538c22c5e2de3836136b7fa16e245a140f6ef0ad4f34c WatchSource:0}: Error finding container 108b766365f0b11c8f1538c22c5e2de3836136b7fa16e245a140f6ef0ad4f34c: Status 404 returned error can't find the container with id 108b766365f0b11c8f1538c22c5e2de3836136b7fa16e245a140f6ef0ad4f34c
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.843129 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32","Type":"ContainerStarted","Data":"0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953"}
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.843428 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32","Type":"ContainerStarted","Data":"e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c"}
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.843444 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32","Type":"ContainerStarted","Data":"e46a09fa770cb60bfd2d7fb11d801d1ac575ed645548b7c8e54b296a20ef2d53"}
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.854300 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0765164d-a12f-4917-a71c-c909a27d4ba6","Type":"ContainerStarted","Data":"108b766365f0b11c8f1538c22c5e2de3836136b7fa16e245a140f6ef0ad4f34c"}
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.871811 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658a98a6-1dad-44ac-b9b3-1e4cf09ca063","Type":"ContainerStarted","Data":"694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026"}
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.871865 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658a98a6-1dad-44ac-b9b3-1e4cf09ca063","Type":"ContainerStarted","Data":"b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186"}
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.871880 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658a98a6-1dad-44ac-b9b3-1e4cf09ca063","Type":"ContainerStarted","Data":"be08e51ebeb8fee184100de19bce7a07c30b6a9112b26fa2bf19edab51931ebf"}
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.873361 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.8733389059999999 podStartE2EDuration="1.873338906s" podCreationTimestamp="2025-11-26 08:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:48.865032059 +0000 UTC m=+5421.011243225" watchObservedRunningTime="2025-11-26 08:30:48.873338906 +0000 UTC m=+5421.019550072"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.885582 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.8855657510000001 podStartE2EDuration="1.885565751s" podCreationTimestamp="2025-11-26 08:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:48.884864241 +0000 UTC m=+5421.031075407" watchObservedRunningTime="2025-11-26 08:30:48.885565751 +0000 UTC m=+5421.031776917"
Nov 26 08:30:48 crc kubenswrapper[4909]: I1126 08:30:48.903976 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.903958542 podStartE2EDuration="1.903958542s" podCreationTimestamp="2025-11-26 08:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:30:48.903111379 +0000 UTC m=+5421.049322555" watchObservedRunningTime="2025-11-26 08:30:48.903958542 +0000 UTC m=+5421.050169708"
Nov 26 08:30:49 crc kubenswrapper[4909]: I1126 08:30:49.892414 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0765164d-a12f-4917-a71c-c909a27d4ba6","Type":"ContainerStarted","Data":"d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba"}
Nov 26 08:30:52 crc kubenswrapper[4909]: I1126 08:30:52.576209 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 26 08:30:52 crc kubenswrapper[4909]: I1126 08:30:52.576679 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
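Note: in the "Observed pod startup duration" records above, podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp; for nova-metadata-0 that is 08:30:48.873338906 − 08:30:47 = 1.873338906s (likewise 1.885565751s for the scheduler and 1.903958542s for the API pod), and the 0001-01-01 pulling timestamps indicate no image pull was needed. A small Go check of that arithmetic, with the timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values taken from the nova-metadata-0 startup-latency record.
        created, _ := time.Parse(time.RFC3339Nano, "2025-11-26T08:30:47Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2025-11-26T08:30:48.873338906Z")
        fmt.Println(observed.Sub(created)) // prints 1.873338906s
    }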
4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 26 08:30:57 crc kubenswrapper[4909]: I1126 08:30:57.560954 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 08:30:57 crc kubenswrapper[4909]: I1126 08:30:57.561573 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 08:30:57 crc kubenswrapper[4909]: I1126 08:30:57.576000 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 08:30:57 crc kubenswrapper[4909]: I1126 08:30:57.576416 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 08:30:58 crc kubenswrapper[4909]: I1126 08:30:58.226638 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 26 08:30:58 crc kubenswrapper[4909]: I1126 08:30:58.252903 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 26 08:30:58 crc kubenswrapper[4909]: I1126 08:30:58.725866 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.87:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 08:30:58 crc kubenswrapper[4909]: I1126 08:30:58.725926 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 08:30:58 crc kubenswrapper[4909]: I1126 08:30:58.726028 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 08:30:58 crc kubenswrapper[4909]: I1126 08:30:58.726334 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.87:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 08:30:59 crc kubenswrapper[4909]: I1126 08:30:59.033138 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 26 08:31:07 crc kubenswrapper[4909]: I1126 08:31:07.300877 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:31:07 crc kubenswrapper[4909]: I1126 08:31:07.301405 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:31:07 crc kubenswrapper[4909]: I1126 
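The "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" text in the nova startup-probe failures above is the standard error string Go's net/http client produces when a request outlives its deadline; the kubelet's HTTP prober is such a client. A minimal sketch that reproduces the same error shape (standard library only; the URL is the nova-api probe target copied from the log, and the 1-second timeout is an illustrative assumption, not the pod's actual timeoutSeconds):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The kubelet derives a similar per-request deadline from the probe's
	// timeoutSeconds; 1s here is only an assumption for the sketch.
	client := http.Client{Timeout: 1 * time.Second}

	// Probe target taken from the nova-api-0 log line above. Against an
	// endpoint that is slow to send headers, err prints as:
	//   Get "http://10.217.1.86:8774/": context deadline exceeded
	//   (Client.Timeout exceeded while awaiting headers)
	_, err := client.Get("http://10.217.1.86:8774/")
	if err != nil {
		fmt.Println(err)
	}
}
```

By contrast, the machine-config-daemon liveness failure just above fails with "connect: connection refused": the TCP connection itself was rejected, so no timeout was ever reached.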
Nov 26 08:31:07 crc kubenswrapper[4909]: I1126 08:31:07.568505 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 26 08:31:07 crc kubenswrapper[4909]: I1126 08:31:07.569622 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 26 08:31:07 crc kubenswrapper[4909]: I1126 08:31:07.571507 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 26 08:31:07 crc kubenswrapper[4909]: I1126 08:31:07.575083 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 26 08:31:07 crc kubenswrapper[4909]: I1126 08:31:07.579973 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 26 08:31:07 crc kubenswrapper[4909]: I1126 08:31:07.581233 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 26 08:31:07 crc kubenswrapper[4909]: I1126 08:31:07.583038 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.115342 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.116989 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.118805 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.373427 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85c7886d8f-sk5nt"]
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.374952 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.386342 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85c7886d8f-sk5nt"]
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.448177 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-dns-svc\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.448263 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhdq9\" (UniqueName: \"kubernetes.io/projected/7e92ea5e-40a4-4095-a688-88a290d337d3-kube-api-access-fhdq9\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.448288 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-sb\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.448304 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-config\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.448333 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-nb\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.549675 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhdq9\" (UniqueName: \"kubernetes.io/projected/7e92ea5e-40a4-4095-a688-88a290d337d3-kube-api-access-fhdq9\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.550123 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-config\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.550217 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-sb\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.550309 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-nb\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.550451 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-dns-svc\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.551509 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-config\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.551643 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-nb\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.551842 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-sb\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.552201 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-dns-svc\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.582328 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhdq9\" (UniqueName: \"kubernetes.io/projected/7e92ea5e-40a4-4095-a688-88a290d337d3-kube-api-access-fhdq9\") pod \"dnsmasq-dns-85c7886d8f-sk5nt\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:08 crc kubenswrapper[4909]: I1126 08:31:08.707044 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:09 crc kubenswrapper[4909]: I1126 08:31:09.219387 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85c7886d8f-sk5nt"]
Nov 26 08:31:10 crc kubenswrapper[4909]: I1126 08:31:10.135606 4909 generic.go:334] "Generic (PLEG): container finished" podID="7e92ea5e-40a4-4095-a688-88a290d337d3" containerID="35c59070c133e6c217341830cda91c0170538066bbc2f014e585c6fede4cec5e" exitCode=0
Nov 26 08:31:10 crc kubenswrapper[4909]: I1126 08:31:10.135777 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt" event={"ID":"7e92ea5e-40a4-4095-a688-88a290d337d3","Type":"ContainerDied","Data":"35c59070c133e6c217341830cda91c0170538066bbc2f014e585c6fede4cec5e"}
Nov 26 08:31:10 crc kubenswrapper[4909]: I1126 08:31:10.139054 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt" event={"ID":"7e92ea5e-40a4-4095-a688-88a290d337d3","Type":"ContainerStarted","Data":"9deae93226182a6ceb0c3f3f506264d654a519abf9d226ce5f75eefa27c2719f"}
Nov 26 08:31:11 crc kubenswrapper[4909]: I1126 08:31:11.150557 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt" event={"ID":"7e92ea5e-40a4-4095-a688-88a290d337d3","Type":"ContainerStarted","Data":"495f61a68ab35e62a60a9cd3656c700e924c84757de94ca4bdd3896b0cc539ba"}
Nov 26 08:31:11 crc kubenswrapper[4909]: I1126 08:31:11.150781 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:11 crc kubenswrapper[4909]: I1126 08:31:11.182310 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt" podStartSLOduration=3.182283186 podStartE2EDuration="3.182283186s" podCreationTimestamp="2025-11-26 08:31:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:31:11.176474888 +0000 UTC m=+5443.322686084" watchObservedRunningTime="2025-11-26 08:31:11.182283186 +0000 UTC m=+5443.328494392"
Nov 26 08:31:18 crc kubenswrapper[4909]: I1126 08:31:18.709917 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt"
Nov 26 08:31:18 crc kubenswrapper[4909]: I1126 08:31:18.813793 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69dc7db885-gzxxg"]
Nov 26 08:31:18 crc kubenswrapper[4909]: I1126 08:31:18.814382 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" podUID="dc91e68f-682c-4264-bba3-20021adc7996" containerName="dnsmasq-dns" containerID="cri-o://04fabe8516c77486a4e7fb80559f8abc7ed90f75ebbe157c43c7895ab2544082" gracePeriod=10
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.239989 4909 generic.go:334] "Generic (PLEG): container finished" podID="dc91e68f-682c-4264-bba3-20021adc7996" containerID="04fabe8516c77486a4e7fb80559f8abc7ed90f75ebbe157c43c7895ab2544082" exitCode=0
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.240334 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" event={"ID":"dc91e68f-682c-4264-bba3-20021adc7996","Type":"ContainerDied","Data":"04fabe8516c77486a4e7fb80559f8abc7ed90f75ebbe157c43c7895ab2544082"}
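The "SyncLoop DELETE" followed by "Killing container with a grace period ... gracePeriod=10" above is the kubelet reacting to a pod deletion that carried a 10-second grace period; the old dnsmasq container exited cleanly (exitCode=0) well within it. A sketch of issuing the same deletion with client-go (the kubeconfig loading and panic-on-error are illustrative assumptions; namespace, pod name, and grace period are taken from the log):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; in-cluster config would work the same way.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	grace := int64(10) // matches gracePeriod=10 in the log above
	err = cs.CoreV1().Pods("openstack").Delete(context.TODO(),
		"dnsmasq-dns-69dc7db885-gzxxg",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		panic(err)
	}
}
```

Had the container not exited before the grace period elapsed, the kubelet would have escalated to SIGKILL; here the PLEG "ContainerDied" events follow within a second.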
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.240421 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg" event={"ID":"dc91e68f-682c-4264-bba3-20021adc7996","Type":"ContainerDied","Data":"03f9e03eccd814fd50f4d5246a3a4876603ed27a069d59bb9178f89cefd2b27b"}
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.240436 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f9e03eccd814fd50f4d5246a3a4876603ed27a069d59bb9178f89cefd2b27b"
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.288322 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg"
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.375099 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-dns-svc\") pod \"dc91e68f-682c-4264-bba3-20021adc7996\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") "
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.375193 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-sb\") pod \"dc91e68f-682c-4264-bba3-20021adc7996\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") "
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.375313 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57lh9\" (UniqueName: \"kubernetes.io/projected/dc91e68f-682c-4264-bba3-20021adc7996-kube-api-access-57lh9\") pod \"dc91e68f-682c-4264-bba3-20021adc7996\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") "
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.376886 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-nb\") pod \"dc91e68f-682c-4264-bba3-20021adc7996\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") "
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.376922 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-config\") pod \"dc91e68f-682c-4264-bba3-20021adc7996\" (UID: \"dc91e68f-682c-4264-bba3-20021adc7996\") "
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.380823 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc91e68f-682c-4264-bba3-20021adc7996-kube-api-access-57lh9" (OuterVolumeSpecName: "kube-api-access-57lh9") pod "dc91e68f-682c-4264-bba3-20021adc7996" (UID: "dc91e68f-682c-4264-bba3-20021adc7996"). InnerVolumeSpecName "kube-api-access-57lh9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.418316 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc91e68f-682c-4264-bba3-20021adc7996" (UID: "dc91e68f-682c-4264-bba3-20021adc7996"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.424995 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc91e68f-682c-4264-bba3-20021adc7996" (UID: "dc91e68f-682c-4264-bba3-20021adc7996"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.428836 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc91e68f-682c-4264-bba3-20021adc7996" (UID: "dc91e68f-682c-4264-bba3-20021adc7996"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.435124 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-config" (OuterVolumeSpecName: "config") pod "dc91e68f-682c-4264-bba3-20021adc7996" (UID: "dc91e68f-682c-4264-bba3-20021adc7996"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.478613 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.478658 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57lh9\" (UniqueName: \"kubernetes.io/projected/dc91e68f-682c-4264-bba3-20021adc7996-kube-api-access-57lh9\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.478671 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.478689 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-config\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:19 crc kubenswrapper[4909]: I1126 08:31:19.478700 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc91e68f-682c-4264-bba3-20021adc7996-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:20 crc kubenswrapper[4909]: I1126 08:31:20.252247 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69dc7db885-gzxxg"
Nov 26 08:31:20 crc kubenswrapper[4909]: I1126 08:31:20.299763 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69dc7db885-gzxxg"]
Nov 26 08:31:20 crc kubenswrapper[4909]: I1126 08:31:20.310640 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69dc7db885-gzxxg"]
Nov 26 08:31:20 crc kubenswrapper[4909]: I1126 08:31:20.513771 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc91e68f-682c-4264-bba3-20021adc7996" path="/var/lib/kubelet/pods/dc91e68f-682c-4264-bba3-20021adc7996/volumes"
Nov 26 08:31:20 crc kubenswrapper[4909]: I1126 08:31:20.927284 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-gsl8q"]
Nov 26 08:31:20 crc kubenswrapper[4909]: E1126 08:31:20.927791 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc91e68f-682c-4264-bba3-20021adc7996" containerName="init"
Nov 26 08:31:20 crc kubenswrapper[4909]: I1126 08:31:20.927823 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc91e68f-682c-4264-bba3-20021adc7996" containerName="init"
Nov 26 08:31:20 crc kubenswrapper[4909]: E1126 08:31:20.927857 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc91e68f-682c-4264-bba3-20021adc7996" containerName="dnsmasq-dns"
Nov 26 08:31:20 crc kubenswrapper[4909]: I1126 08:31:20.927869 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc91e68f-682c-4264-bba3-20021adc7996" containerName="dnsmasq-dns"
Nov 26 08:31:20 crc kubenswrapper[4909]: I1126 08:31:20.928193 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc91e68f-682c-4264-bba3-20021adc7996" containerName="dnsmasq-dns"
Nov 26 08:31:20 crc kubenswrapper[4909]: I1126 08:31:20.929229 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gsl8q"
Nov 26 08:31:20 crc kubenswrapper[4909]: I1126 08:31:20.938784 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gsl8q"]
Nov 26 08:31:21 crc kubenswrapper[4909]: I1126 08:31:21.008375 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w62tt\" (UniqueName: \"kubernetes.io/projected/36ea1c00-0636-4f6b-982d-a5fc0ca216be-kube-api-access-w62tt\") pod \"cinder-db-create-gsl8q\" (UID: \"36ea1c00-0636-4f6b-982d-a5fc0ca216be\") " pod="openstack/cinder-db-create-gsl8q"
Nov 26 08:31:21 crc kubenswrapper[4909]: I1126 08:31:21.110525 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w62tt\" (UniqueName: \"kubernetes.io/projected/36ea1c00-0636-4f6b-982d-a5fc0ca216be-kube-api-access-w62tt\") pod \"cinder-db-create-gsl8q\" (UID: \"36ea1c00-0636-4f6b-982d-a5fc0ca216be\") " pod="openstack/cinder-db-create-gsl8q"
Nov 26 08:31:21 crc kubenswrapper[4909]: I1126 08:31:21.145436 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w62tt\" (UniqueName: \"kubernetes.io/projected/36ea1c00-0636-4f6b-982d-a5fc0ca216be-kube-api-access-w62tt\") pod \"cinder-db-create-gsl8q\" (UID: \"36ea1c00-0636-4f6b-982d-a5fc0ca216be\") " pod="openstack/cinder-db-create-gsl8q"
Nov 26 08:31:21 crc kubenswrapper[4909]: I1126 08:31:21.247007 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gsl8q"
Nov 26 08:31:21 crc kubenswrapper[4909]: I1126 08:31:21.772965 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gsl8q"]
Nov 26 08:31:21 crc kubenswrapper[4909]: W1126 08:31:21.791446 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36ea1c00_0636_4f6b_982d_a5fc0ca216be.slice/crio-e7a100d227c95c3520942b5753486c050583e29c691708f3c33b4796300b558e WatchSource:0}: Error finding container e7a100d227c95c3520942b5753486c050583e29c691708f3c33b4796300b558e: Status 404 returned error can't find the container with id e7a100d227c95c3520942b5753486c050583e29c691708f3c33b4796300b558e
Nov 26 08:31:22 crc kubenswrapper[4909]: I1126 08:31:22.292122 4909 generic.go:334] "Generic (PLEG): container finished" podID="36ea1c00-0636-4f6b-982d-a5fc0ca216be" containerID="982b5f724b47ec9cc5a95d59f5a34c5c7a8f90869ea6645cbdf86e933ac70a10" exitCode=0
Nov 26 08:31:22 crc kubenswrapper[4909]: I1126 08:31:22.292486 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gsl8q" event={"ID":"36ea1c00-0636-4f6b-982d-a5fc0ca216be","Type":"ContainerDied","Data":"982b5f724b47ec9cc5a95d59f5a34c5c7a8f90869ea6645cbdf86e933ac70a10"}
Nov 26 08:31:22 crc kubenswrapper[4909]: I1126 08:31:22.292539 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gsl8q" event={"ID":"36ea1c00-0636-4f6b-982d-a5fc0ca216be","Type":"ContainerStarted","Data":"e7a100d227c95c3520942b5753486c050583e29c691708f3c33b4796300b558e"}
Nov 26 08:31:23 crc kubenswrapper[4909]: I1126 08:31:23.677542 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gsl8q"
Nov 26 08:31:23 crc kubenswrapper[4909]: I1126 08:31:23.759569 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w62tt\" (UniqueName: \"kubernetes.io/projected/36ea1c00-0636-4f6b-982d-a5fc0ca216be-kube-api-access-w62tt\") pod \"36ea1c00-0636-4f6b-982d-a5fc0ca216be\" (UID: \"36ea1c00-0636-4f6b-982d-a5fc0ca216be\") "
Nov 26 08:31:23 crc kubenswrapper[4909]: I1126 08:31:23.764822 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ea1c00-0636-4f6b-982d-a5fc0ca216be-kube-api-access-w62tt" (OuterVolumeSpecName: "kube-api-access-w62tt") pod "36ea1c00-0636-4f6b-982d-a5fc0ca216be" (UID: "36ea1c00-0636-4f6b-982d-a5fc0ca216be"). InnerVolumeSpecName "kube-api-access-w62tt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:31:23 crc kubenswrapper[4909]: I1126 08:31:23.861877 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w62tt\" (UniqueName: \"kubernetes.io/projected/36ea1c00-0636-4f6b-982d-a5fc0ca216be-kube-api-access-w62tt\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:24 crc kubenswrapper[4909]: I1126 08:31:24.310355 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gsl8q" event={"ID":"36ea1c00-0636-4f6b-982d-a5fc0ca216be","Type":"ContainerDied","Data":"e7a100d227c95c3520942b5753486c050583e29c691708f3c33b4796300b558e"}
Nov 26 08:31:24 crc kubenswrapper[4909]: I1126 08:31:24.310731 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7a100d227c95c3520942b5753486c050583e29c691708f3c33b4796300b558e"
Nov 26 08:31:24 crc kubenswrapper[4909]: I1126 08:31:24.310436 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gsl8q"
Nov 26 08:31:31 crc kubenswrapper[4909]: I1126 08:31:31.045995 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-55e3-account-create-rknvq"]
Nov 26 08:31:31 crc kubenswrapper[4909]: E1126 08:31:31.047140 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ea1c00-0636-4f6b-982d-a5fc0ca216be" containerName="mariadb-database-create"
Nov 26 08:31:31 crc kubenswrapper[4909]: I1126 08:31:31.047160 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ea1c00-0636-4f6b-982d-a5fc0ca216be" containerName="mariadb-database-create"
Nov 26 08:31:31 crc kubenswrapper[4909]: I1126 08:31:31.047478 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ea1c00-0636-4f6b-982d-a5fc0ca216be" containerName="mariadb-database-create"
Nov 26 08:31:31 crc kubenswrapper[4909]: I1126 08:31:31.048308 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-55e3-account-create-rknvq"
Nov 26 08:31:31 crc kubenswrapper[4909]: I1126 08:31:31.051421 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Nov 26 08:31:31 crc kubenswrapper[4909]: I1126 08:31:31.057399 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-55e3-account-create-rknvq"]
Nov 26 08:31:31 crc kubenswrapper[4909]: I1126 08:31:31.110255 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85t59\" (UniqueName: \"kubernetes.io/projected/acf11ea3-34c1-4351-8799-24196560d05e-kube-api-access-85t59\") pod \"cinder-55e3-account-create-rknvq\" (UID: \"acf11ea3-34c1-4351-8799-24196560d05e\") " pod="openstack/cinder-55e3-account-create-rknvq"
Nov 26 08:31:31 crc kubenswrapper[4909]: I1126 08:31:31.212500 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85t59\" (UniqueName: \"kubernetes.io/projected/acf11ea3-34c1-4351-8799-24196560d05e-kube-api-access-85t59\") pod \"cinder-55e3-account-create-rknvq\" (UID: \"acf11ea3-34c1-4351-8799-24196560d05e\") " pod="openstack/cinder-55e3-account-create-rknvq"
Nov 26 08:31:31 crc kubenswrapper[4909]: I1126 08:31:31.232356 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85t59\" (UniqueName: \"kubernetes.io/projected/acf11ea3-34c1-4351-8799-24196560d05e-kube-api-access-85t59\") pod \"cinder-55e3-account-create-rknvq\" (UID: \"acf11ea3-34c1-4351-8799-24196560d05e\") " pod="openstack/cinder-55e3-account-create-rknvq"
Nov 26 08:31:31 crc kubenswrapper[4909]: I1126 08:31:31.379293 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-55e3-account-create-rknvq"
Nov 26 08:31:31 crc kubenswrapper[4909]: I1126 08:31:31.852855 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-55e3-account-create-rknvq"]
Nov 26 08:31:32 crc kubenswrapper[4909]: I1126 08:31:32.387164 4909 generic.go:334] "Generic (PLEG): container finished" podID="acf11ea3-34c1-4351-8799-24196560d05e" containerID="e5c3dc4167208114c5d6ff1321c86e6a285d9a8fd663e9ecd6d36ba6dbc6d67a" exitCode=0
Nov 26 08:31:32 crc kubenswrapper[4909]: I1126 08:31:32.387231 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-55e3-account-create-rknvq" event={"ID":"acf11ea3-34c1-4351-8799-24196560d05e","Type":"ContainerDied","Data":"e5c3dc4167208114c5d6ff1321c86e6a285d9a8fd663e9ecd6d36ba6dbc6d67a"}
Nov 26 08:31:32 crc kubenswrapper[4909]: I1126 08:31:32.387452 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-55e3-account-create-rknvq" event={"ID":"acf11ea3-34c1-4351-8799-24196560d05e","Type":"ContainerStarted","Data":"afb6851b9e92fba96e23ef7b1460d297506827eac47c16f583bf46b1c40582c6"}
Nov 26 08:31:33 crc kubenswrapper[4909]: I1126 08:31:33.802400 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-55e3-account-create-rknvq"
Nov 26 08:31:33 crc kubenswrapper[4909]: I1126 08:31:33.873890 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85t59\" (UniqueName: \"kubernetes.io/projected/acf11ea3-34c1-4351-8799-24196560d05e-kube-api-access-85t59\") pod \"acf11ea3-34c1-4351-8799-24196560d05e\" (UID: \"acf11ea3-34c1-4351-8799-24196560d05e\") "
Nov 26 08:31:33 crc kubenswrapper[4909]: I1126 08:31:33.879817 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf11ea3-34c1-4351-8799-24196560d05e-kube-api-access-85t59" (OuterVolumeSpecName: "kube-api-access-85t59") pod "acf11ea3-34c1-4351-8799-24196560d05e" (UID: "acf11ea3-34c1-4351-8799-24196560d05e"). InnerVolumeSpecName "kube-api-access-85t59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:31:33 crc kubenswrapper[4909]: I1126 08:31:33.976448 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85t59\" (UniqueName: \"kubernetes.io/projected/acf11ea3-34c1-4351-8799-24196560d05e-kube-api-access-85t59\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:34 crc kubenswrapper[4909]: I1126 08:31:34.411083 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-55e3-account-create-rknvq" event={"ID":"acf11ea3-34c1-4351-8799-24196560d05e","Type":"ContainerDied","Data":"afb6851b9e92fba96e23ef7b1460d297506827eac47c16f583bf46b1c40582c6"}
Nov 26 08:31:34 crc kubenswrapper[4909]: I1126 08:31:34.411389 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb6851b9e92fba96e23ef7b1460d297506827eac47c16f583bf46b1c40582c6"
Nov 26 08:31:34 crc kubenswrapper[4909]: I1126 08:31:34.411156 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-55e3-account-create-rknvq"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.192249 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-79m9z"]
Nov 26 08:31:36 crc kubenswrapper[4909]: E1126 08:31:36.193150 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf11ea3-34c1-4351-8799-24196560d05e" containerName="mariadb-account-create"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.193175 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf11ea3-34c1-4351-8799-24196560d05e" containerName="mariadb-account-create"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.193471 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf11ea3-34c1-4351-8799-24196560d05e" containerName="mariadb-account-create"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.194828 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.198200 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8926d"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.198752 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.201102 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.216686 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-79m9z"]
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.324999 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z766p\" (UniqueName: \"kubernetes.io/projected/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-kube-api-access-z766p\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.325263 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-config-data\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.325327 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-db-sync-config-data\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.325419 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-etc-machine-id\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.325478 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-combined-ca-bundle\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.325559 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-scripts\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.427225 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-etc-machine-id\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.427277 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-combined-ca-bundle\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.427360 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-scripts\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.427364 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-etc-machine-id\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.427407 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z766p\" (UniqueName: \"kubernetes.io/projected/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-kube-api-access-z766p\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.427435 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-config-data\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.427669 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-db-sync-config-data\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.434531 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-config-data\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.435121 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-scripts\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.435146 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-db-sync-config-data\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.435670 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-combined-ca-bundle\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.452218 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z766p\" (UniqueName: \"kubernetes.io/projected/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-kube-api-access-z766p\") pod \"cinder-db-sync-79m9z\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") " pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:36 crc kubenswrapper[4909]: I1126 08:31:36.528863 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:37 crc kubenswrapper[4909]: I1126 08:31:37.087690 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-79m9z"]
Nov 26 08:31:37 crc kubenswrapper[4909]: W1126 08:31:37.104975 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f53c6ed_d2ac_4e96_8aa5_82427f3c3f7b.slice/crio-70da0ac485415b159eeab17345a31b45b273f2252942e2642f20df3403dd6745 WatchSource:0}: Error finding container 70da0ac485415b159eeab17345a31b45b273f2252942e2642f20df3403dd6745: Status 404 returned error can't find the container with id 70da0ac485415b159eeab17345a31b45b273f2252942e2642f20df3403dd6745
Nov 26 08:31:37 crc kubenswrapper[4909]: I1126 08:31:37.300932 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 08:31:37 crc kubenswrapper[4909]: I1126 08:31:37.301006 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 08:31:37 crc kubenswrapper[4909]: I1126 08:31:37.301055 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv"
Nov 26 08:31:37 crc kubenswrapper[4909]: I1126 08:31:37.301929 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5a2ca2aa716ec654cdedcf8d1dd83703540811f543055e7744242b9dcfda8f7"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 26 08:31:37 crc kubenswrapper[4909]: I1126 08:31:37.301990 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://e5a2ca2aa716ec654cdedcf8d1dd83703540811f543055e7744242b9dcfda8f7" gracePeriod=600
Nov 26 08:31:37 crc kubenswrapper[4909]: I1126 08:31:37.448396 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-79m9z" event={"ID":"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b","Type":"ContainerStarted","Data":"70da0ac485415b159eeab17345a31b45b273f2252942e2642f20df3403dd6745"}
Nov 26 08:31:37 crc kubenswrapper[4909]: I1126 08:31:37.453930 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="e5a2ca2aa716ec654cdedcf8d1dd83703540811f543055e7744242b9dcfda8f7" exitCode=0
Nov 26 08:31:37 crc kubenswrapper[4909]: I1126 08:31:37.454002 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"e5a2ca2aa716ec654cdedcf8d1dd83703540811f543055e7744242b9dcfda8f7"}
Nov 26 08:31:37 crc kubenswrapper[4909]: I1126 08:31:37.454040 4909 scope.go:117] "RemoveContainer" containerID="0c9ea47b15bd4bc4839712cf193fd0208944b76c4479e71014f046925d12be36"
Nov 26 08:31:38 crc kubenswrapper[4909]: I1126 08:31:38.470054 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-79m9z" event={"ID":"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b","Type":"ContainerStarted","Data":"7f810459f6eef7e1cfaee2f920bada1fe149752d8588ce07282dc1875e677788"}
Nov 26 08:31:38 crc kubenswrapper[4909]: I1126 08:31:38.476465 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed"}
Nov 26 08:31:38 crc kubenswrapper[4909]: I1126 08:31:38.498009 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-79m9z" podStartSLOduration=2.497990252 podStartE2EDuration="2.497990252s" podCreationTimestamp="2025-11-26 08:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:31:38.490912579 +0000 UTC m=+5470.637123745" watchObservedRunningTime="2025-11-26 08:31:38.497990252 +0000 UTC m=+5470.644201418"
Nov 26 08:31:41 crc kubenswrapper[4909]: I1126 08:31:41.510338 4909 generic.go:334] "Generic (PLEG): container finished" podID="1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b" containerID="7f810459f6eef7e1cfaee2f920bada1fe149752d8588ce07282dc1875e677788" exitCode=0
Nov 26 08:31:41 crc kubenswrapper[4909]: I1126 08:31:41.510455 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-79m9z" event={"ID":"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b","Type":"ContainerDied","Data":"7f810459f6eef7e1cfaee2f920bada1fe149752d8588ce07282dc1875e677788"}
Nov 26 08:31:42 crc kubenswrapper[4909]: I1126 08:31:42.911241 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-79m9z"
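The machine-config-daemon liveness episode above is a plain HTTP GET against http://127.0.0.1:8798/health; "connect: connection refused" means nothing was listening on that port at probe time, so the kubelet killed and restarted the container (gracePeriod=600, then ContainerDied/ContainerStarted). On the serving side such a probe needs nothing more than a 2xx response; a minimal sketch (standard library only, not the daemon's actual implementation):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// The kubelet treats any status in [200, 400) as probe success.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Bind address taken from the probe URL in the log; until this listener
	// is up, probes fail with "connect: connection refused" as seen above.
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}
```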
Nov 26 08:31:42 crc kubenswrapper[4909]: I1126 08:31:42.963294 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-config-data\") pod \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") "
Nov 26 08:31:42 crc kubenswrapper[4909]: I1126 08:31:42.963465 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z766p\" (UniqueName: \"kubernetes.io/projected/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-kube-api-access-z766p\") pod \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") "
Nov 26 08:31:42 crc kubenswrapper[4909]: I1126 08:31:42.963507 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-combined-ca-bundle\") pod \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") "
Nov 26 08:31:42 crc kubenswrapper[4909]: I1126 08:31:42.963540 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-etc-machine-id\") pod \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") "
Nov 26 08:31:42 crc kubenswrapper[4909]: I1126 08:31:42.963567 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-scripts\") pod \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") "
Nov 26 08:31:42 crc kubenswrapper[4909]: I1126 08:31:42.963641 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-db-sync-config-data\") pod \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\" (UID: \"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b\") "
Nov 26 08:31:42 crc kubenswrapper[4909]: I1126 08:31:42.964518 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b" (UID: "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 08:31:42 crc kubenswrapper[4909]: I1126 08:31:42.971809 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b" (UID: "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:31:42 crc kubenswrapper[4909]: I1126 08:31:42.976781 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-scripts" (OuterVolumeSpecName: "scripts") pod "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b" (UID: "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:31:42 crc kubenswrapper[4909]: I1126 08:31:42.995788 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-kube-api-access-z766p" (OuterVolumeSpecName: "kube-api-access-z766p") pod "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b" (UID: "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b"). InnerVolumeSpecName "kube-api-access-z766p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.041306 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b" (UID: "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.065543 4909 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.065586 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z766p\" (UniqueName: \"kubernetes.io/projected/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-kube-api-access-z766p\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.065679 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.065690 4909 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.065701 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.081759 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-config-data" (OuterVolumeSpecName: "config-data") pod "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b" (UID: "1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.167504 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.539532 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-79m9z" event={"ID":"1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b","Type":"ContainerDied","Data":"70da0ac485415b159eeab17345a31b45b273f2252942e2642f20df3403dd6745"}
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.539619 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70da0ac485415b159eeab17345a31b45b273f2252942e2642f20df3403dd6745"
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.539661 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-79m9z"
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.951250 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7784748f7f-s6hvz"]
Nov 26 08:31:43 crc kubenswrapper[4909]: E1126 08:31:43.951706 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b" containerName="cinder-db-sync"
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.951722 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b" containerName="cinder-db-sync"
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.951901 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b" containerName="cinder-db-sync"
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.952913 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.971095 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7784748f7f-s6hvz"]
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.996132 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv9dw\" (UniqueName: \"kubernetes.io/projected/ca293453-2173-43be-a1cc-a7da8c47f256-kube-api-access-fv9dw\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.996193 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-dns-svc\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.996214 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-sb\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.996247 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-nb\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:43 crc kubenswrapper[4909]: I1126 08:31:43.996282 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-config\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.098153 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-dns-svc\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.098524 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-sb\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.098564 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-nb\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.098711 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-config\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.098813 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv9dw\" (UniqueName: \"kubernetes.io/projected/ca293453-2173-43be-a1cc-a7da8c47f256-kube-api-access-fv9dw\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.099728 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-sb\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.099740 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-nb\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.099858 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-config\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.099965 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-dns-svc\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.117393 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv9dw\" (UniqueName: \"kubernetes.io/projected/ca293453-2173-43be-a1cc-a7da8c47f256-kube-api-access-fv9dw\") pod \"dnsmasq-dns-7784748f7f-s6hvz\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " pod="openstack/dnsmasq-dns-7784748f7f-s6hvz"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.174559 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.176393 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.178313 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8926d"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.183490 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.183489 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.183970 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.186695 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.302750 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.302831 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.302886 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-scripts\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.302960 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.303027 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w2p4\" (UniqueName: \"kubernetes.io/projected/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-kube-api-access-7w2p4\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.303133 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.303164 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-logs\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0"
Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.329269 4909 util.go:30]
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.406512 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.406567 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-logs\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.406637 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.406659 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.406689 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-scripts\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.406723 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.406763 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w2p4\" (UniqueName: \"kubernetes.io/projected/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-kube-api-access-7w2p4\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.410132 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-logs\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.410207 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.417149 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data\") pod \"cinder-api-0\" (UID: 
\"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.419905 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-scripts\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.420910 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.429320 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.430117 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w2p4\" (UniqueName: \"kubernetes.io/projected/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-kube-api-access-7w2p4\") pod \"cinder-api-0\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") " pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.493516 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 26 08:31:44 crc kubenswrapper[4909]: I1126 08:31:44.727891 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7784748f7f-s6hvz"] Nov 26 08:31:45 crc kubenswrapper[4909]: I1126 08:31:45.034652 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 26 08:31:45 crc kubenswrapper[4909]: I1126 08:31:45.601669 4909 generic.go:334] "Generic (PLEG): container finished" podID="ca293453-2173-43be-a1cc-a7da8c47f256" containerID="fe88b164336357d8d81c6d20cd5d3123feeab49995c9dcd369e89a46f755ad48" exitCode=0 Nov 26 08:31:45 crc kubenswrapper[4909]: I1126 08:31:45.602060 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" event={"ID":"ca293453-2173-43be-a1cc-a7da8c47f256","Type":"ContainerDied","Data":"fe88b164336357d8d81c6d20cd5d3123feeab49995c9dcd369e89a46f755ad48"} Nov 26 08:31:45 crc kubenswrapper[4909]: I1126 08:31:45.602091 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" event={"ID":"ca293453-2173-43be-a1cc-a7da8c47f256","Type":"ContainerStarted","Data":"93f354b7fdb0efc2eca3e5d46cc1c3f3f3af48e971ef47a2b0a0be915b5b32ce"} Nov 26 08:31:45 crc kubenswrapper[4909]: I1126 08:31:45.608061 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72dabde5-9582-45ca-ab0d-fa93aa5f28bd","Type":"ContainerStarted","Data":"58edea080008f9b80ebaf32277d5822022b2ef2bdf680c1f6dfa9b145e9080d2"} Nov 26 08:31:46 crc kubenswrapper[4909]: I1126 08:31:46.640213 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" event={"ID":"ca293453-2173-43be-a1cc-a7da8c47f256","Type":"ContainerStarted","Data":"201c9a16c55228024ecb09c501c43455b4410a46d06abbac683c977662ed7d1b"} Nov 26 08:31:46 crc kubenswrapper[4909]: I1126 08:31:46.640510 4909 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" Nov 26 08:31:46 crc kubenswrapper[4909]: I1126 08:31:46.651979 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72dabde5-9582-45ca-ab0d-fa93aa5f28bd","Type":"ContainerStarted","Data":"12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05"} Nov 26 08:31:46 crc kubenswrapper[4909]: I1126 08:31:46.652040 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72dabde5-9582-45ca-ab0d-fa93aa5f28bd","Type":"ContainerStarted","Data":"57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5"} Nov 26 08:31:46 crc kubenswrapper[4909]: I1126 08:31:46.652471 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 26 08:31:46 crc kubenswrapper[4909]: I1126 08:31:46.672958 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" podStartSLOduration=3.672934947 podStartE2EDuration="3.672934947s" podCreationTimestamp="2025-11-26 08:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:31:46.666944003 +0000 UTC m=+5478.813155169" watchObservedRunningTime="2025-11-26 08:31:46.672934947 +0000 UTC m=+5478.819146113" Nov 26 08:31:46 crc kubenswrapper[4909]: I1126 08:31:46.687035 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.687012981 podStartE2EDuration="2.687012981s" podCreationTimestamp="2025-11-26 08:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:31:46.684056971 +0000 UTC m=+5478.830268137" watchObservedRunningTime="2025-11-26 08:31:46.687012981 +0000 UTC m=+5478.833224147" Nov 26 08:31:54 crc kubenswrapper[4909]: I1126 08:31:54.331763 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" Nov 26 08:31:54 crc kubenswrapper[4909]: I1126 08:31:54.423070 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85c7886d8f-sk5nt"] Nov 26 08:31:54 crc kubenswrapper[4909]: I1126 08:31:54.423655 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt" podUID="7e92ea5e-40a4-4095-a688-88a290d337d3" containerName="dnsmasq-dns" containerID="cri-o://495f61a68ab35e62a60a9cd3656c700e924c84757de94ca4bdd3896b0cc539ba" gracePeriod=10 Nov 26 08:31:54 crc kubenswrapper[4909]: I1126 08:31:54.759159 4909 generic.go:334] "Generic (PLEG): container finished" podID="7e92ea5e-40a4-4095-a688-88a290d337d3" containerID="495f61a68ab35e62a60a9cd3656c700e924c84757de94ca4bdd3896b0cc539ba" exitCode=0 Nov 26 08:31:54 crc kubenswrapper[4909]: I1126 08:31:54.759357 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt" event={"ID":"7e92ea5e-40a4-4095-a688-88a290d337d3","Type":"ContainerDied","Data":"495f61a68ab35e62a60a9cd3656c700e924c84757de94ca4bdd3896b0cc539ba"} Nov 26 08:31:54 crc kubenswrapper[4909]: I1126 08:31:54.910515 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.009356 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-nb\") pod \"7e92ea5e-40a4-4095-a688-88a290d337d3\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.009583 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-sb\") pod \"7e92ea5e-40a4-4095-a688-88a290d337d3\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.009637 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhdq9\" (UniqueName: \"kubernetes.io/projected/7e92ea5e-40a4-4095-a688-88a290d337d3-kube-api-access-fhdq9\") pod \"7e92ea5e-40a4-4095-a688-88a290d337d3\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.009674 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-config\") pod \"7e92ea5e-40a4-4095-a688-88a290d337d3\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.009758 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-dns-svc\") pod \"7e92ea5e-40a4-4095-a688-88a290d337d3\" (UID: \"7e92ea5e-40a4-4095-a688-88a290d337d3\") " Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.020848 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e92ea5e-40a4-4095-a688-88a290d337d3-kube-api-access-fhdq9" (OuterVolumeSpecName: "kube-api-access-fhdq9") pod "7e92ea5e-40a4-4095-a688-88a290d337d3" (UID: "7e92ea5e-40a4-4095-a688-88a290d337d3"). InnerVolumeSpecName "kube-api-access-fhdq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.100970 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7e92ea5e-40a4-4095-a688-88a290d337d3" (UID: "7e92ea5e-40a4-4095-a688-88a290d337d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.107387 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-config" (OuterVolumeSpecName: "config") pod "7e92ea5e-40a4-4095-a688-88a290d337d3" (UID: "7e92ea5e-40a4-4095-a688-88a290d337d3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.118056 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.118110 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhdq9\" (UniqueName: \"kubernetes.io/projected/7e92ea5e-40a4-4095-a688-88a290d337d3-kube-api-access-fhdq9\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.118136 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.126375 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7e92ea5e-40a4-4095-a688-88a290d337d3" (UID: "7e92ea5e-40a4-4095-a688-88a290d337d3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.127972 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e92ea5e-40a4-4095-a688-88a290d337d3" (UID: "7e92ea5e-40a4-4095-a688-88a290d337d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.219646 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.219934 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e92ea5e-40a4-4095-a688-88a290d337d3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.769211 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt" event={"ID":"7e92ea5e-40a4-4095-a688-88a290d337d3","Type":"ContainerDied","Data":"9deae93226182a6ceb0c3f3f506264d654a519abf9d226ce5f75eefa27c2719f"} Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.769257 4909 scope.go:117] "RemoveContainer" containerID="495f61a68ab35e62a60a9cd3656c700e924c84757de94ca4bdd3896b0cc539ba" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.769373 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85c7886d8f-sk5nt" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.813101 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.813292 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0765164d-a12f-4917-a71c-c909a27d4ba6" containerName="nova-scheduler-scheduler" containerID="cri-o://d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba" gracePeriod=30 Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.823491 4909 scope.go:117] "RemoveContainer" containerID="35c59070c133e6c217341830cda91c0170538066bbc2f014e585c6fede4cec5e" Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.837510 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.837756 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="9fab9a27-8a55-4940-9006-be7909597eff" containerName="nova-cell0-conductor-conductor" containerID="cri-o://3b81c3f18f99dfa911806847beaacd3c5cc182f9ca0bf56045a82e37d930f84b" gracePeriod=30 Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.850644 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.850878 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerName="nova-metadata-log" containerID="cri-o://e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c" gracePeriod=30 Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.850999 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerName="nova-metadata-metadata" containerID="cri-o://0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953" gracePeriod=30 Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.859737 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.860194 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerName="nova-api-log" containerID="cri-o://b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186" gracePeriod=30 Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.860330 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerName="nova-api-api" containerID="cri-o://694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026" gracePeriod=30 Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.879720 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85c7886d8f-sk5nt"] Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.893292 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85c7886d8f-sk5nt"] Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.906729 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 08:31:55 crc kubenswrapper[4909]: I1126 08:31:55.906959 4909 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0a0db1d6-8015-4a27-9c53-ad181dceb4eb" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a" gracePeriod=30 Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.540668 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e92ea5e-40a4-4095-a688-88a290d337d3" path="/var/lib/kubelet/pods/7e92ea5e-40a4-4095-a688-88a290d337d3/volumes" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.582028 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.734183 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.754036 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqwsb\" (UniqueName: \"kubernetes.io/projected/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-kube-api-access-xqwsb\") pod \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\" (UID: \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.754093 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-config-data\") pod \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\" (UID: \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.754205 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-combined-ca-bundle\") pod \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\" (UID: \"0a0db1d6-8015-4a27-9c53-ad181dceb4eb\") " Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.762673 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-kube-api-access-xqwsb" (OuterVolumeSpecName: "kube-api-access-xqwsb") pod "0a0db1d6-8015-4a27-9c53-ad181dceb4eb" (UID: "0a0db1d6-8015-4a27-9c53-ad181dceb4eb"). InnerVolumeSpecName "kube-api-access-xqwsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.790171 4909 generic.go:334] "Generic (PLEG): container finished" podID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerID="e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c" exitCode=143 Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.790330 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32","Type":"ContainerDied","Data":"e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c"} Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.792403 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-config-data" (OuterVolumeSpecName: "config-data") pod "0a0db1d6-8015-4a27-9c53-ad181dceb4eb" (UID: "0a0db1d6-8015-4a27-9c53-ad181dceb4eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.795887 4909 generic.go:334] "Generic (PLEG): container finished" podID="0a0db1d6-8015-4a27-9c53-ad181dceb4eb" containerID="eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a" exitCode=0 Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.795912 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.795978 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0a0db1d6-8015-4a27-9c53-ad181dceb4eb","Type":"ContainerDied","Data":"eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a"} Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.796006 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0a0db1d6-8015-4a27-9c53-ad181dceb4eb","Type":"ContainerDied","Data":"0ea65c78dafd70907f00212d3aaebd9ba2c95d88b7f12c5d6ffcff867df1e91f"} Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.796040 4909 scope.go:117] "RemoveContainer" containerID="eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.816115 4909 generic.go:334] "Generic (PLEG): container finished" podID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerID="b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186" exitCode=143 Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.816175 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658a98a6-1dad-44ac-b9b3-1e4cf09ca063","Type":"ContainerDied","Data":"b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186"} Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.822345 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a0db1d6-8015-4a27-9c53-ad181dceb4eb" (UID: "0a0db1d6-8015-4a27-9c53-ad181dceb4eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.855309 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.855340 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqwsb\" (UniqueName: \"kubernetes.io/projected/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-kube-api-access-xqwsb\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.855349 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0db1d6-8015-4a27-9c53-ad181dceb4eb-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.960066 4909 scope.go:117] "RemoveContainer" containerID="eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a" Nov 26 08:31:56 crc kubenswrapper[4909]: E1126 08:31:56.960638 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a\": container with ID starting with eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a not found: ID does not exist" containerID="eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a" Nov 26 08:31:56 crc kubenswrapper[4909]: I1126 08:31:56.960698 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a"} err="failed to get container status \"eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a\": rpc error: code = NotFound desc = could not find container \"eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a\": container with ID starting with eb26f757c63a5dbf98acd3ea550eef1f52ee1d3bfaeeca810c352acea010806a not found: ID does not exist" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.093702 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.140457 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.163225 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.178307 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 08:31:57 crc kubenswrapper[4909]: E1126 08:31:57.178820 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e92ea5e-40a4-4095-a688-88a290d337d3" containerName="dnsmasq-dns" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.178841 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e92ea5e-40a4-4095-a688-88a290d337d3" containerName="dnsmasq-dns" Nov 26 08:31:57 crc kubenswrapper[4909]: E1126 08:31:57.178881 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e92ea5e-40a4-4095-a688-88a290d337d3" containerName="init" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.178890 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e92ea5e-40a4-4095-a688-88a290d337d3" containerName="init" Nov 26 08:31:57 crc kubenswrapper[4909]: E1126 08:31:57.178908 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0db1d6-8015-4a27-9c53-ad181dceb4eb" containerName="nova-cell1-novncproxy-novncproxy" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.178917 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0db1d6-8015-4a27-9c53-ad181dceb4eb" containerName="nova-cell1-novncproxy-novncproxy" Nov 26 08:31:57 crc kubenswrapper[4909]: E1126 08:31:57.178963 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0765164d-a12f-4917-a71c-c909a27d4ba6" containerName="nova-scheduler-scheduler" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.178972 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="0765164d-a12f-4917-a71c-c909a27d4ba6" containerName="nova-scheduler-scheduler" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.179183 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e92ea5e-40a4-4095-a688-88a290d337d3" containerName="dnsmasq-dns" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.179226 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0db1d6-8015-4a27-9c53-ad181dceb4eb" containerName="nova-cell1-novncproxy-novncproxy" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.179248 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="0765164d-a12f-4917-a71c-c909a27d4ba6" containerName="nova-scheduler-scheduler" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.180034 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.182241 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.183390 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.260844 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkw87\" (UniqueName: \"kubernetes.io/projected/0765164d-a12f-4917-a71c-c909a27d4ba6-kube-api-access-hkw87\") pod \"0765164d-a12f-4917-a71c-c909a27d4ba6\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.260937 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-combined-ca-bundle\") pod \"0765164d-a12f-4917-a71c-c909a27d4ba6\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.261186 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-config-data\") pod \"0765164d-a12f-4917-a71c-c909a27d4ba6\" (UID: \"0765164d-a12f-4917-a71c-c909a27d4ba6\") " Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.261669 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07294b7f-cf09-4c22-a428-5c25bb75ae6f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"07294b7f-cf09-4c22-a428-5c25bb75ae6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.261846 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrv4\" (UniqueName: \"kubernetes.io/projected/07294b7f-cf09-4c22-a428-5c25bb75ae6f-kube-api-access-nkrv4\") pod \"nova-cell1-novncproxy-0\" (UID: \"07294b7f-cf09-4c22-a428-5c25bb75ae6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.261928 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07294b7f-cf09-4c22-a428-5c25bb75ae6f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"07294b7f-cf09-4c22-a428-5c25bb75ae6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.263897 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0765164d-a12f-4917-a71c-c909a27d4ba6-kube-api-access-hkw87" (OuterVolumeSpecName: "kube-api-access-hkw87") pod "0765164d-a12f-4917-a71c-c909a27d4ba6" (UID: "0765164d-a12f-4917-a71c-c909a27d4ba6"). InnerVolumeSpecName "kube-api-access-hkw87". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.284171 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0765164d-a12f-4917-a71c-c909a27d4ba6" (UID: "0765164d-a12f-4917-a71c-c909a27d4ba6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.284252 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-config-data" (OuterVolumeSpecName: "config-data") pod "0765164d-a12f-4917-a71c-c909a27d4ba6" (UID: "0765164d-a12f-4917-a71c-c909a27d4ba6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.363016 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07294b7f-cf09-4c22-a428-5c25bb75ae6f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"07294b7f-cf09-4c22-a428-5c25bb75ae6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.363162 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07294b7f-cf09-4c22-a428-5c25bb75ae6f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"07294b7f-cf09-4c22-a428-5c25bb75ae6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.363672 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrv4\" (UniqueName: \"kubernetes.io/projected/07294b7f-cf09-4c22-a428-5c25bb75ae6f-kube-api-access-nkrv4\") pod \"nova-cell1-novncproxy-0\" (UID: \"07294b7f-cf09-4c22-a428-5c25bb75ae6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.363735 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkw87\" (UniqueName: \"kubernetes.io/projected/0765164d-a12f-4917-a71c-c909a27d4ba6-kube-api-access-hkw87\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.363749 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.363761 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0765164d-a12f-4917-a71c-c909a27d4ba6-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.366349 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07294b7f-cf09-4c22-a428-5c25bb75ae6f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"07294b7f-cf09-4c22-a428-5c25bb75ae6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.367699 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07294b7f-cf09-4c22-a428-5c25bb75ae6f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"07294b7f-cf09-4c22-a428-5c25bb75ae6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.378409 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrv4\" (UniqueName: \"kubernetes.io/projected/07294b7f-cf09-4c22-a428-5c25bb75ae6f-kube-api-access-nkrv4\") pod \"nova-cell1-novncproxy-0\" (UID: \"07294b7f-cf09-4c22-a428-5c25bb75ae6f\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.497613 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.829361 4909 generic.go:334] "Generic (PLEG): container finished" podID="0765164d-a12f-4917-a71c-c909a27d4ba6" containerID="d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba" exitCode=0 Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.829413 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0765164d-a12f-4917-a71c-c909a27d4ba6","Type":"ContainerDied","Data":"d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba"} Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.829810 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0765164d-a12f-4917-a71c-c909a27d4ba6","Type":"ContainerDied","Data":"108b766365f0b11c8f1538c22c5e2de3836136b7fa16e245a140f6ef0ad4f34c"} Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.829836 4909 scope.go:117] "RemoveContainer" containerID="d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.829430 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.862695 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.870809 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.880684 4909 scope.go:117] "RemoveContainer" containerID="d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba" Nov 26 08:31:57 crc kubenswrapper[4909]: E1126 08:31:57.881733 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba\": container with ID starting with d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba not found: ID does not exist" containerID="d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.881763 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba"} err="failed to get container status \"d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba\": rpc error: code = NotFound desc = could not find container \"d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba\": container with ID starting with d71c7eb701c643c5780deeadf8cbb39c9e2cd2de29084ff627592e1d427e12ba not found: ID does not exist" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.895505 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.897031 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.903411 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.904440 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.973113 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " pod="openstack/nova-scheduler-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.973387 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-config-data\") pod \"nova-scheduler-0\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " pod="openstack/nova-scheduler-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.973507 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbn9n\" (UniqueName: \"kubernetes.io/projected/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-kube-api-access-vbn9n\") pod \"nova-scheduler-0\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " pod="openstack/nova-scheduler-0" Nov 26 08:31:57 crc kubenswrapper[4909]: I1126 08:31:57.995231 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.074779 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-config-data\") pod \"nova-scheduler-0\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " pod="openstack/nova-scheduler-0" Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.074859 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbn9n\" (UniqueName: \"kubernetes.io/projected/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-kube-api-access-vbn9n\") pod \"nova-scheduler-0\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " pod="openstack/nova-scheduler-0" Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.074934 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " pod="openstack/nova-scheduler-0" Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.079213 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " pod="openstack/nova-scheduler-0" Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.079620 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-config-data\") pod \"nova-scheduler-0\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " pod="openstack/nova-scheduler-0" Nov 26 08:31:58 crc 
kubenswrapper[4909]: I1126 08:31:58.100763 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbn9n\" (UniqueName: \"kubernetes.io/projected/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-kube-api-access-vbn9n\") pod \"nova-scheduler-0\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " pod="openstack/nova-scheduler-0" Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.215012 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.530957 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0765164d-a12f-4917-a71c-c909a27d4ba6" path="/var/lib/kubelet/pods/0765164d-a12f-4917-a71c-c909a27d4ba6/volumes" Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.531725 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0db1d6-8015-4a27-9c53-ad181dceb4eb" path="/var/lib/kubelet/pods/0a0db1d6-8015-4a27-9c53-ad181dceb4eb/volumes" Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.717283 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.842903 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"098da7ec-6f47-4e30-8e5f-00b91d2c7c26","Type":"ContainerStarted","Data":"85b4f43872ae6a6db98d7bc81a7fa5cc06b1ffb16efb58e532f5fb0ff59bd14c"} Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.846388 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"07294b7f-cf09-4c22-a428-5c25bb75ae6f","Type":"ContainerStarted","Data":"ffe20a725c713da844b6dddcbba5f6ff7292cfb170539c3c784bf13527673b17"} Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.846411 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"07294b7f-cf09-4c22-a428-5c25bb75ae6f","Type":"ContainerStarted","Data":"e8f5a796c7f16a2f5b6f3f1abfe6acb67cd5a8f260d97474036cafd81d43c4bc"} Nov 26 08:31:58 crc kubenswrapper[4909]: I1126 08:31:58.881324 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.8813057020000001 podStartE2EDuration="1.881305702s" podCreationTimestamp="2025-11-26 08:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:31:58.871550076 +0000 UTC m=+5491.017761242" watchObservedRunningTime="2025-11-26 08:31:58.881305702 +0000 UTC m=+5491.027516858" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.077865 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.87:8775/\": read tcp 10.217.0.2:39406->10.217.1.87:8775: read: connection reset by peer" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.078271 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.87:8775/\": read tcp 10.217.0.2:39418->10.217.1.87:8775: read: connection reset by peer" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.114162 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.114391 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="f9b3a4ae-809b-43b1-b4e1-3d4815a7a714" containerName="nova-cell1-conductor-conductor" containerID="cri-o://3937dce0f30503250c04023587ef92a3292ab7739c2bc77f4eeaa6660dc450f6" gracePeriod=30 Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.426934 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.532885 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.607702 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8vgq\" (UniqueName: \"kubernetes.io/projected/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-kube-api-access-k8vgq\") pod \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.607962 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-logs\") pod \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.608116 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-config-data\") pod \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.608188 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-combined-ca-bundle\") pod \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\" (UID: \"658a98a6-1dad-44ac-b9b3-1e4cf09ca063\") " Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.612346 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-logs" (OuterVolumeSpecName: "logs") pod "658a98a6-1dad-44ac-b9b3-1e4cf09ca063" (UID: "658a98a6-1dad-44ac-b9b3-1e4cf09ca063"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.620533 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-kube-api-access-k8vgq" (OuterVolumeSpecName: "kube-api-access-k8vgq") pod "658a98a6-1dad-44ac-b9b3-1e4cf09ca063" (UID: "658a98a6-1dad-44ac-b9b3-1e4cf09ca063"). InnerVolumeSpecName "kube-api-access-k8vgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.661889 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "658a98a6-1dad-44ac-b9b3-1e4cf09ca063" (UID: "658a98a6-1dad-44ac-b9b3-1e4cf09ca063"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.668961 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-config-data" (OuterVolumeSpecName: "config-data") pod "658a98a6-1dad-44ac-b9b3-1e4cf09ca063" (UID: "658a98a6-1dad-44ac-b9b3-1e4cf09ca063"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.712804 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-logs\") pod \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.712910 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-config-data\") pod \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.712998 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfgkp\" (UniqueName: \"kubernetes.io/projected/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-kube-api-access-vfgkp\") pod \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.713031 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-combined-ca-bundle\") pod \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\" (UID: \"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32\") " Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.713343 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.713354 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.713363 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.713374 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8vgq\" (UniqueName: \"kubernetes.io/projected/658a98a6-1dad-44ac-b9b3-1e4cf09ca063-kube-api-access-k8vgq\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.718269 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-kube-api-access-vfgkp" (OuterVolumeSpecName: "kube-api-access-vfgkp") pod "e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" (UID: "e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32"). InnerVolumeSpecName "kube-api-access-vfgkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.722104 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-logs" (OuterVolumeSpecName: "logs") pod "e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" (UID: "e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.755931 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" (UID: "e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.763723 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-config-data" (OuterVolumeSpecName: "config-data") pod "e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" (UID: "e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.815671 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.815708 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfgkp\" (UniqueName: \"kubernetes.io/projected/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-kube-api-access-vfgkp\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.815720 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.815729 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.867995 4909 generic.go:334] "Generic (PLEG): container finished" podID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerID="694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026" exitCode=0 Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.868082 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658a98a6-1dad-44ac-b9b3-1e4cf09ca063","Type":"ContainerDied","Data":"694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026"} Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.868115 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"658a98a6-1dad-44ac-b9b3-1e4cf09ca063","Type":"ContainerDied","Data":"be08e51ebeb8fee184100de19bce7a07c30b6a9112b26fa2bf19edab51931ebf"} Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.868137 4909 scope.go:117] "RemoveContainer" containerID="694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.868373 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.877462 4909 generic.go:334] "Generic (PLEG): container finished" podID="9fab9a27-8a55-4940-9006-be7909597eff" containerID="3b81c3f18f99dfa911806847beaacd3c5cc182f9ca0bf56045a82e37d930f84b" exitCode=0 Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.877519 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9fab9a27-8a55-4940-9006-be7909597eff","Type":"ContainerDied","Data":"3b81c3f18f99dfa911806847beaacd3c5cc182f9ca0bf56045a82e37d930f84b"} Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.884755 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"098da7ec-6f47-4e30-8e5f-00b91d2c7c26","Type":"ContainerStarted","Data":"6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc"} Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.890840 4909 generic.go:334] "Generic (PLEG): container finished" podID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerID="0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953" exitCode=0 Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.891538 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.894070 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32","Type":"ContainerDied","Data":"0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953"} Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.894118 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32","Type":"ContainerDied","Data":"e46a09fa770cb60bfd2d7fb11d801d1ac575ed645548b7c8e54b296a20ef2d53"} Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.914799 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.9147756510000002 podStartE2EDuration="2.914775651s" podCreationTimestamp="2025-11-26 08:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:31:59.906389552 +0000 UTC m=+5492.052600718" watchObservedRunningTime="2025-11-26 08:31:59.914775651 +0000 UTC m=+5492.060986807" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.936906 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.943445 4909 scope.go:117] "RemoveContainer" containerID="b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186" Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.959411 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:31:59 crc kubenswrapper[4909]: I1126 08:31:59.997866 4909 scope.go:117] "RemoveContainer" containerID="694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.003497 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:32:00 crc kubenswrapper[4909]: E1126 08:32:00.004905 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026\": container with ID starting with 694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026 not found: ID does not exist" containerID="694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.004944 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026"} err="failed to get container status \"694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026\": rpc error: code = NotFound desc = could not find container \"694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026\": container with ID starting with 694d38efabbb542b6c02ad2c0d2a917706c2d0744b48325aa98de4605d8b3026 not found: ID does not exist" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.004968 4909 scope.go:117] "RemoveContainer" containerID="b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.011792 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:32:00 crc kubenswrapper[4909]: E1126 08:32:00.012833 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186\": container with ID starting with b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186 not found: ID does not exist" containerID="b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.012873 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186"} err="failed to get container status \"b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186\": rpc error: code = NotFound desc = could not find container \"b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186\": container with ID starting with b3620cad2d56f94f4fe3fc0e321903d246b68e0ba145091ae505c6eb773d3186 not found: ID does not exist" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.012892 4909 scope.go:117] "RemoveContainer" containerID="0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.020908 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:32:00 crc kubenswrapper[4909]: E1126 08:32:00.021297 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerName="nova-metadata-metadata" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.021313 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerName="nova-metadata-metadata" Nov 26 08:32:00 crc kubenswrapper[4909]: E1126 08:32:00.021325 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerName="nova-api-api" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.021331 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerName="nova-api-api" Nov 26 08:32:00 crc kubenswrapper[4909]: E1126 08:32:00.021368 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" 
containerName="nova-metadata-log" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.021373 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerName="nova-metadata-log" Nov 26 08:32:00 crc kubenswrapper[4909]: E1126 08:32:00.021391 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerName="nova-api-log" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.021396 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerName="nova-api-log" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.021551 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerName="nova-metadata-log" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.021559 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerName="nova-api-api" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.021570 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" containerName="nova-api-log" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.021585 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" containerName="nova-metadata-metadata" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.024205 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.028533 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.028883 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.037496 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.039257 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.045233 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.049705 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.086025 4909 scope.go:117] "RemoveContainer" containerID="e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.126506 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc7q2\" (UniqueName: \"kubernetes.io/projected/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-kube-api-access-dc7q2\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.126665 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.126749 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-config-data\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.126807 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-logs\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.132904 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.149044 4909 scope.go:117] "RemoveContainer" containerID="0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953" Nov 26 08:32:00 crc kubenswrapper[4909]: E1126 08:32:00.152090 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953\": container with ID starting with 0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953 not found: ID does not exist" containerID="0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.152274 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953"} err="failed to get container status \"0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953\": rpc error: code = NotFound desc = could not find container \"0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953\": container with ID starting with 0285edf54178e6317f5f5640d69edfd536471efce1e6a68790557d358bec7953 not found: ID does not exist" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.152424 4909 scope.go:117] "RemoveContainer" containerID="e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c" Nov 26 08:32:00 crc kubenswrapper[4909]: E1126 08:32:00.156704 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c\": container with ID starting with e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c not found: ID does not exist" containerID="e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.156858 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c"} err="failed to get container status \"e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c\": rpc error: code = NotFound desc = could not find container \"e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c\": container with ID starting with e413a2b94afba10c5c9795d76de82af2093d5db24e8336f5b3a97ee9d32e574c not found: ID does not exist" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.228195 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.228272 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-config-data\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.228300 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da8256fc-4601-411d-9bfd-c86c73421537-logs\") pod \"nova-api-0\" (UID: 
\"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.228323 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.228352 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-logs\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.228372 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-config-data\") pod \"nova-api-0\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.228436 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc7q2\" (UniqueName: \"kubernetes.io/projected/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-kube-api-access-dc7q2\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.228454 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxc74\" (UniqueName: \"kubernetes.io/projected/da8256fc-4601-411d-9bfd-c86c73421537-kube-api-access-qxc74\") pod \"nova-api-0\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.230909 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-logs\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.233318 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-config-data\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.238011 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.247487 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc7q2\" (UniqueName: \"kubernetes.io/projected/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-kube-api-access-dc7q2\") pod \"nova-metadata-0\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.329990 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-config-data\") pod \"9fab9a27-8a55-4940-9006-be7909597eff\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.330232 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnftp\" (UniqueName: \"kubernetes.io/projected/9fab9a27-8a55-4940-9006-be7909597eff-kube-api-access-jnftp\") pod \"9fab9a27-8a55-4940-9006-be7909597eff\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.330260 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-combined-ca-bundle\") pod \"9fab9a27-8a55-4940-9006-be7909597eff\" (UID: \"9fab9a27-8a55-4940-9006-be7909597eff\") " Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.330540 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxc74\" (UniqueName: \"kubernetes.io/projected/da8256fc-4601-411d-9bfd-c86c73421537-kube-api-access-qxc74\") pod \"nova-api-0\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.330639 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da8256fc-4601-411d-9bfd-c86c73421537-logs\") pod \"nova-api-0\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.330670 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.330702 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-config-data\") pod \"nova-api-0\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.331253 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da8256fc-4601-411d-9bfd-c86c73421537-logs\") pod \"nova-api-0\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.333968 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.335686 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fab9a27-8a55-4940-9006-be7909597eff-kube-api-access-jnftp" (OuterVolumeSpecName: "kube-api-access-jnftp") pod "9fab9a27-8a55-4940-9006-be7909597eff" (UID: "9fab9a27-8a55-4940-9006-be7909597eff"). InnerVolumeSpecName "kube-api-access-jnftp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.339740 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-config-data\") pod \"nova-api-0\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.348640 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxc74\" (UniqueName: \"kubernetes.io/projected/da8256fc-4601-411d-9bfd-c86c73421537-kube-api-access-qxc74\") pod \"nova-api-0\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.354520 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.363548 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-config-data" (OuterVolumeSpecName: "config-data") pod "9fab9a27-8a55-4940-9006-be7909597eff" (UID: "9fab9a27-8a55-4940-9006-be7909597eff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.367359 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fab9a27-8a55-4940-9006-be7909597eff" (UID: "9fab9a27-8a55-4940-9006-be7909597eff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.380056 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.437509 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.437574 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnftp\" (UniqueName: \"kubernetes.io/projected/9fab9a27-8a55-4940-9006-be7909597eff-kube-api-access-jnftp\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.437689 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fab9a27-8a55-4940-9006-be7909597eff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.513399 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="658a98a6-1dad-44ac-b9b3-1e4cf09ca063" path="/var/lib/kubelet/pods/658a98a6-1dad-44ac-b9b3-1e4cf09ca063/volumes" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.514365 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32" path="/var/lib/kubelet/pods/e48ec2ae-d1ce-4d56-90f5-2b46b76d8d32/volumes" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.681219 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 08:32:00 crc kubenswrapper[4909]: W1126 08:32:00.687947 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3e723ef_619a_4ed8_a8b0_5920ccc5dfaa.slice/crio-6581367885cf086b1f5cc557bb0f963a1006754deba33ec0f424bbbe2afbfb05 WatchSource:0}: Error finding container 6581367885cf086b1f5cc557bb0f963a1006754deba33ec0f424bbbe2afbfb05: Status 404 returned error can't find the container with id 6581367885cf086b1f5cc557bb0f963a1006754deba33ec0f424bbbe2afbfb05 Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.851511 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.929034 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9fab9a27-8a55-4940-9006-be7909597eff","Type":"ContainerDied","Data":"130cb0f9c458f80b4443038a17c7c1140c4b35bbae2ae8b9286d971431c26937"} Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.929084 4909 scope.go:117] "RemoveContainer" containerID="3b81c3f18f99dfa911806847beaacd3c5cc182f9ca0bf56045a82e37d930f84b" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.929224 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.939910 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa","Type":"ContainerStarted","Data":"6581367885cf086b1f5cc557bb0f963a1006754deba33ec0f424bbbe2afbfb05"} Nov 26 08:32:00 crc kubenswrapper[4909]: I1126 08:32:00.941179 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da8256fc-4601-411d-9bfd-c86c73421537","Type":"ContainerStarted","Data":"f7a84af66dd85cd5e88ecf13cb047cb577a1eb2933060c4d4da428e1f11b9d90"} Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.007533 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.037329 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.037391 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 08:32:01 crc kubenswrapper[4909]: E1126 08:32:01.042322 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fab9a27-8a55-4940-9006-be7909597eff" containerName="nova-cell0-conductor-conductor" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.042353 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fab9a27-8a55-4940-9006-be7909597eff" containerName="nova-cell0-conductor-conductor" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.042627 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fab9a27-8a55-4940-9006-be7909597eff" containerName="nova-cell0-conductor-conductor" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.044231 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.044318 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.052431 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.118937 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktdzf\" (UniqueName: \"kubernetes.io/projected/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-kube-api-access-ktdzf\") pod \"nova-cell0-conductor-0\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.119004 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.119023 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.220305 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktdzf\" (UniqueName: \"kubernetes.io/projected/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-kube-api-access-ktdzf\") pod \"nova-cell0-conductor-0\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.220374 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.220392 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.239622 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.239697 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.242545 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktdzf\" (UniqueName: 
\"kubernetes.io/projected/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-kube-api-access-ktdzf\") pod \"nova-cell0-conductor-0\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.364180 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.808150 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 08:32:01 crc kubenswrapper[4909]: W1126 08:32:01.821804 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bd6851d_7422_4701_a9da_1ab5ca8ce7df.slice/crio-9a38596b015427ea967312dbbe23de7533a07db55b58ce14c5703c883df1dc90 WatchSource:0}: Error finding container 9a38596b015427ea967312dbbe23de7533a07db55b58ce14c5703c883df1dc90: Status 404 returned error can't find the container with id 9a38596b015427ea967312dbbe23de7533a07db55b58ce14c5703c883df1dc90 Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.950877 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9bd6851d-7422-4701-a9da-1ab5ca8ce7df","Type":"ContainerStarted","Data":"9a38596b015427ea967312dbbe23de7533a07db55b58ce14c5703c883df1dc90"} Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.954141 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa","Type":"ContainerStarted","Data":"29c3c076a9f890b0e35d179ff254261051f48c2c9f224b18f80d2267f5a2f21b"} Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.954175 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa","Type":"ContainerStarted","Data":"d54e82eef0a7befc7107ab598f02c01e2f8dc84ceccec5f8a8f692bd196a22aa"} Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.958342 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da8256fc-4601-411d-9bfd-c86c73421537","Type":"ContainerStarted","Data":"fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e"} Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.958374 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da8256fc-4601-411d-9bfd-c86c73421537","Type":"ContainerStarted","Data":"a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82"} Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.978087 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.978065848 podStartE2EDuration="2.978065848s" podCreationTimestamp="2025-11-26 08:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:32:01.970533523 +0000 UTC m=+5494.116744689" watchObservedRunningTime="2025-11-26 08:32:01.978065848 +0000 UTC m=+5494.124277014" Nov 26 08:32:01 crc kubenswrapper[4909]: I1126 08:32:01.998815 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.998789805 podStartE2EDuration="2.998789805s" podCreationTimestamp="2025-11-26 08:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-26 08:32:01.98949853 +0000 UTC m=+5494.135709706" watchObservedRunningTime="2025-11-26 08:32:01.998789805 +0000 UTC m=+5494.145000971" Nov 26 08:32:02 crc kubenswrapper[4909]: I1126 08:32:02.498167 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:32:02 crc kubenswrapper[4909]: I1126 08:32:02.509421 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fab9a27-8a55-4940-9006-be7909597eff" path="/var/lib/kubelet/pods/9fab9a27-8a55-4940-9006-be7909597eff/volumes" Nov 26 08:32:02 crc kubenswrapper[4909]: I1126 08:32:02.986785 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9bd6851d-7422-4701-a9da-1ab5ca8ce7df","Type":"ContainerStarted","Data":"e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63"} Nov 26 08:32:02 crc kubenswrapper[4909]: I1126 08:32:02.987112 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 26 08:32:02 crc kubenswrapper[4909]: I1126 08:32:02.988969 4909 generic.go:334] "Generic (PLEG): container finished" podID="f9b3a4ae-809b-43b1-b4e1-3d4815a7a714" containerID="3937dce0f30503250c04023587ef92a3292ab7739c2bc77f4eeaa6660dc450f6" exitCode=0 Nov 26 08:32:02 crc kubenswrapper[4909]: I1126 08:32:02.989779 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714","Type":"ContainerDied","Data":"3937dce0f30503250c04023587ef92a3292ab7739c2bc77f4eeaa6660dc450f6"} Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.010533 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.010516969 podStartE2EDuration="3.010516969s" podCreationTimestamp="2025-11-26 08:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:32:03.000873995 +0000 UTC m=+5495.147085161" watchObservedRunningTime="2025-11-26 08:32:03.010516969 +0000 UTC m=+5495.156728135" Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.148473 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.215947 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.255076 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-config-data\") pod \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.255198 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnn66\" (UniqueName: \"kubernetes.io/projected/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-kube-api-access-wnn66\") pod \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.255231 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-combined-ca-bundle\") pod \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\" (UID: \"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714\") " Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.261705 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-kube-api-access-wnn66" (OuterVolumeSpecName: "kube-api-access-wnn66") pod "f9b3a4ae-809b-43b1-b4e1-3d4815a7a714" (UID: "f9b3a4ae-809b-43b1-b4e1-3d4815a7a714"). InnerVolumeSpecName "kube-api-access-wnn66". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.282236 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9b3a4ae-809b-43b1-b4e1-3d4815a7a714" (UID: "f9b3a4ae-809b-43b1-b4e1-3d4815a7a714"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.290358 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-config-data" (OuterVolumeSpecName: "config-data") pod "f9b3a4ae-809b-43b1-b4e1-3d4815a7a714" (UID: "f9b3a4ae-809b-43b1-b4e1-3d4815a7a714"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.357425 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.357472 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnn66\" (UniqueName: \"kubernetes.io/projected/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-kube-api-access-wnn66\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:03 crc kubenswrapper[4909]: I1126 08:32:03.357490 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.002473 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.003791 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f9b3a4ae-809b-43b1-b4e1-3d4815a7a714","Type":"ContainerDied","Data":"80ada2ffc25978000898eec42ba1f609341f7c0da12f5588a7dce7d602d266c1"} Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.003862 4909 scope.go:117] "RemoveContainer" containerID="3937dce0f30503250c04023587ef92a3292ab7739c2bc77f4eeaa6660dc450f6" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.042986 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.053933 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.084199 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 08:32:04 crc kubenswrapper[4909]: E1126 08:32:04.085666 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b3a4ae-809b-43b1-b4e1-3d4815a7a714" containerName="nova-cell1-conductor-conductor" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.085698 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b3a4ae-809b-43b1-b4e1-3d4815a7a714" containerName="nova-cell1-conductor-conductor" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.093479 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b3a4ae-809b-43b1-b4e1-3d4815a7a714" containerName="nova-cell1-conductor-conductor" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.096037 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.102534 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.122971 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.177043 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf55k\" (UniqueName: \"kubernetes.io/projected/e5855f59-14c5-493a-ad57-d8a9cea9a517-kube-api-access-cf55k\") pod \"nova-cell1-conductor-0\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.177108 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.177529 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.279756 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.280285 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf55k\" (UniqueName: \"kubernetes.io/projected/e5855f59-14c5-493a-ad57-d8a9cea9a517-kube-api-access-cf55k\") pod \"nova-cell1-conductor-0\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.280466 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.284129 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.298642 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.302559 4909 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf55k\" (UniqueName: \"kubernetes.io/projected/e5855f59-14c5-493a-ad57-d8a9cea9a517-kube-api-access-cf55k\") pod \"nova-cell1-conductor-0\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.425556 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.511560 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b3a4ae-809b-43b1-b4e1-3d4815a7a714" path="/var/lib/kubelet/pods/f9b3a4ae-809b-43b1-b4e1-3d4815a7a714/volumes" Nov 26 08:32:04 crc kubenswrapper[4909]: I1126 08:32:04.900521 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 08:32:04 crc kubenswrapper[4909]: W1126 08:32:04.903716 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5855f59_14c5_493a_ad57_d8a9cea9a517.slice/crio-149035772dd9d469824e1b749a30eedfcfd413515f7f88f053bbd4a98714b585 WatchSource:0}: Error finding container 149035772dd9d469824e1b749a30eedfcfd413515f7f88f053bbd4a98714b585: Status 404 returned error can't find the container with id 149035772dd9d469824e1b749a30eedfcfd413515f7f88f053bbd4a98714b585 Nov 26 08:32:05 crc kubenswrapper[4909]: I1126 08:32:05.016321 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e5855f59-14c5-493a-ad57-d8a9cea9a517","Type":"ContainerStarted","Data":"149035772dd9d469824e1b749a30eedfcfd413515f7f88f053bbd4a98714b585"} Nov 26 08:32:05 crc kubenswrapper[4909]: I1126 08:32:05.355452 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 08:32:05 crc kubenswrapper[4909]: I1126 08:32:05.356291 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 08:32:06 crc kubenswrapper[4909]: I1126 08:32:06.057583 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e5855f59-14c5-493a-ad57-d8a9cea9a517","Type":"ContainerStarted","Data":"ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8"} Nov 26 08:32:06 crc kubenswrapper[4909]: I1126 08:32:06.057882 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 26 08:32:07 crc kubenswrapper[4909]: I1126 08:32:07.498284 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:32:07 crc kubenswrapper[4909]: I1126 08:32:07.514263 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 26 08:32:07 crc kubenswrapper[4909]: I1126 08:32:07.536356 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.536330295 podStartE2EDuration="3.536330295s" podCreationTimestamp="2025-11-26 08:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:32:06.094895177 +0000 UTC m=+5498.241106383" watchObservedRunningTime="2025-11-26 08:32:07.536330295 +0000 UTC m=+5499.682541501" Nov 26 08:32:08 crc kubenswrapper[4909]: I1126 08:32:08.094952 4909 
Nov 26 08:32:08 crc kubenswrapper[4909]: I1126 08:32:08.094952 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Nov 26 08:32:08 crc kubenswrapper[4909]: I1126 08:32:08.215899 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 26 08:32:08 crc kubenswrapper[4909]: I1126 08:32:08.246796 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 26 08:32:09 crc kubenswrapper[4909]: I1126 08:32:09.132962 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 26 08:32:10 crc kubenswrapper[4909]: I1126 08:32:10.354927 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 26 08:32:10 crc kubenswrapper[4909]: I1126 08:32:10.355293 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 26 08:32:10 crc kubenswrapper[4909]: I1126 08:32:10.380961 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 26 08:32:10 crc kubenswrapper[4909]: I1126 08:32:10.381014 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 26 08:32:11 crc kubenswrapper[4909]: I1126 08:32:11.400375 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Nov 26 08:32:11 crc kubenswrapper[4909]: I1126 08:32:11.524710 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="da8256fc-4601-411d-9bfd-c86c73421537" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.98:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 26 08:32:11 crc kubenswrapper[4909]: I1126 08:32:11.525149 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.97:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 26 08:32:11 crc kubenswrapper[4909]: I1126 08:32:11.525240 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="da8256fc-4601-411d-9bfd-c86c73421537" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.98:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 26 08:32:11 crc kubenswrapper[4909]: I1126 08:32:11.525493 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.97:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 26 08:32:13 crc kubenswrapper[4909]: I1126 08:32:13.857644 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 26 08:32:13 crc kubenswrapper[4909]: I1126 08:32:13.860608 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 26 08:32:13 crc kubenswrapper[4909]: I1126 08:32:13.864831 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 26 08:32:13 crc kubenswrapper[4909]: I1126 08:32:13.869961 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 26 08:32:13 crc kubenswrapper[4909]: I1126 08:32:13.987679 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:13 crc kubenswrapper[4909]: I1126 08:32:13.987752 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:13 crc kubenswrapper[4909]: I1126 08:32:13.987876 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:13 crc kubenswrapper[4909]: I1126 08:32:13.987951 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7tnx\" (UniqueName: \"kubernetes.io/projected/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-kube-api-access-m7tnx\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:13 crc kubenswrapper[4909]: I1126 08:32:13.988050 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:13 crc kubenswrapper[4909]: I1126 08:32:13.988075 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.090018 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.090126 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.090226 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.090320 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7tnx\" (UniqueName: \"kubernetes.io/projected/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-kube-api-access-m7tnx\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.090444 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.090489 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.090771 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.096007 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.096192 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.096585 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.096734 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.117970 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7tnx\" (UniqueName: \"kubernetes.io/projected/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-kube-api-access-m7tnx\") pod \"cinder-scheduler-0\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.191809 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.452493 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Nov 26 08:32:14 crc kubenswrapper[4909]: W1126 08:32:14.673610 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad6aaa8b_64ca_4882_b721_cb6d6a691fb7.slice/crio-1f31bffddef5ed5ff9416f6093cafd3f4d85e332814fb8aa6aa887a84c45e017 WatchSource:0}: Error finding container 1f31bffddef5ed5ff9416f6093cafd3f4d85e332814fb8aa6aa887a84c45e017: Status 404 returned error can't find the container with id 1f31bffddef5ed5ff9416f6093cafd3f4d85e332814fb8aa6aa887a84c45e017
Nov 26 08:32:14 crc kubenswrapper[4909]: I1126 08:32:14.680539 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 26 08:32:15 crc kubenswrapper[4909]: I1126 08:32:15.155931 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7","Type":"ContainerStarted","Data":"1f31bffddef5ed5ff9416f6093cafd3f4d85e332814fb8aa6aa887a84c45e017"}
Nov 26 08:32:15 crc kubenswrapper[4909]: I1126 08:32:15.426196 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 26 08:32:15 crc kubenswrapper[4909]: I1126 08:32:15.439986 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="72dabde5-9582-45ca-ab0d-fa93aa5f28bd" containerName="cinder-api-log" containerID="cri-o://57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5" gracePeriod=30
Nov 26 08:32:15 crc kubenswrapper[4909]: I1126 08:32:15.440716 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="72dabde5-9582-45ca-ab0d-fa93aa5f28bd" containerName="cinder-api" containerID="cri-o://12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05" gracePeriod=30
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.145445 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"]
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.147388 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.149964 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.178675 4909 generic.go:334] "Generic (PLEG): container finished" podID="72dabde5-9582-45ca-ab0d-fa93aa5f28bd" containerID="57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5" exitCode=143
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.178734 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72dabde5-9582-45ca-ab0d-fa93aa5f28bd","Type":"ContainerDied","Data":"57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5"}
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.188217 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7","Type":"ContainerStarted","Data":"83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b"}
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.188246 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7","Type":"ContainerStarted","Data":"82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41"}
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.192369 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.221880 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.221862431 podStartE2EDuration="3.221862431s" podCreationTimestamp="2025-11-26 08:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:32:16.219923307 +0000 UTC m=+5508.366134473" watchObservedRunningTime="2025-11-26 08:32:16.221862431 +0000 UTC m=+5508.368073587"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.240821 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ad3ebe0-0caa-449f-9980-0dbddd081302-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.240882 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.240932 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.240957 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ad3ebe0-0caa-449f-9980-0dbddd081302-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.240999 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ad3ebe0-0caa-449f-9980-0dbddd081302-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.241042 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.241072 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1ad3ebe0-0caa-449f-9980-0dbddd081302-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.241089 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-run\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.241112 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2ff9\" (UniqueName: \"kubernetes.io/projected/1ad3ebe0-0caa-449f-9980-0dbddd081302-kube-api-access-n2ff9\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.241131 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ad3ebe0-0caa-449f-9980-0dbddd081302-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.241151 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.241183 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.241237 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.241260 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.241278 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.241322 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344443 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ad3ebe0-0caa-449f-9980-0dbddd081302-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344559 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344619 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1ad3ebe0-0caa-449f-9980-0dbddd081302-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344642 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-run\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344670 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2ff9\" (UniqueName: \"kubernetes.io/projected/1ad3ebe0-0caa-449f-9980-0dbddd081302-kube-api-access-n2ff9\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0"
\"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344720 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344756 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344819 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344849 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344870 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344890 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344919 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ad3ebe0-0caa-449f-9980-0dbddd081302-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344948 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344950 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.344990 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.345021 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ad3ebe0-0caa-449f-9980-0dbddd081302-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.347918 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.349835 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.349963 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.349999 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.350033 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.350068 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.350781 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.350842 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc 
kubenswrapper[4909]: I1126 08:32:16.350866 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1ad3ebe0-0caa-449f-9980-0dbddd081302-run\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.351449 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ad3ebe0-0caa-449f-9980-0dbddd081302-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.356563 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ad3ebe0-0caa-449f-9980-0dbddd081302-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.357199 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ad3ebe0-0caa-449f-9980-0dbddd081302-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.367973 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ad3ebe0-0caa-449f-9980-0dbddd081302-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.369237 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1ad3ebe0-0caa-449f-9980-0dbddd081302-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.372327 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2ff9\" (UniqueName: \"kubernetes.io/projected/1ad3ebe0-0caa-449f-9980-0dbddd081302-kube-api-access-n2ff9\") pod \"cinder-volume-volume1-0\" (UID: \"1ad3ebe0-0caa-449f-9980-0dbddd081302\") " pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.473010 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.819938 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.821466 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.821466 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.823990 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855029 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855086 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855116 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-dev\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855159 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8js4\" (UniqueName: \"kubernetes.io/projected/d44ac0aa-a634-4189-a500-b1ead88f40e0-kube-api-access-k8js4\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855192 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-config-data\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855222 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855379 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855416 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d44ac0aa-a634-4189-a500-b1ead88f40e0-ceph\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855521 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855540 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855574 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-run\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855639 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855668 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-scripts\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855757 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855837 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-sys\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.855895 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-lib-modules\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.868163 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.957708 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-sys\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958095 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-lib-modules\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.957835 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-sys\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958147 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958239 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-lib-modules\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958270 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958301 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-dev\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958358 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8js4\" (UniqueName: \"kubernetes.io/projected/d44ac0aa-a634-4189-a500-b1ead88f40e0-kube-api-access-k8js4\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958405 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-config-data\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958409 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-dev\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958456 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958504 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958530 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d44ac0aa-a634-4189-a500-b1ead88f40e0-ceph\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958577 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958611 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958613 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958652 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958668 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-run\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958697 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958708 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958734 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-scripts\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958811 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958813 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-run\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958862 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958885 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.958979 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d44ac0aa-a634-4189-a500-b1ead88f40e0-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.963687 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-scripts\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.963934 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d44ac0aa-a634-4189-a500-b1ead88f40e0-ceph\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.964071 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.968211 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.969540 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44ac0aa-a634-4189-a500-b1ead88f40e0-config-data\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:16 crc kubenswrapper[4909]: I1126 08:32:16.979049 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8js4\" (UniqueName: \"kubernetes.io/projected/d44ac0aa-a634-4189-a500-b1ead88f40e0-kube-api-access-k8js4\") pod \"cinder-backup-0\" (UID: \"d44ac0aa-a634-4189-a500-b1ead88f40e0\") " pod="openstack/cinder-backup-0"
Nov 26 08:32:17 crc kubenswrapper[4909]: I1126 08:32:17.006202 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Nov 26 08:32:17 crc kubenswrapper[4909]: W1126 08:32:17.009090 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ad3ebe0_0caa_449f_9980_0dbddd081302.slice/crio-ba383ea77ec8d4565c5ba3128c890bc8f50a539d6b95fb21ef2a2ade65e4641d WatchSource:0}: Error finding container ba383ea77ec8d4565c5ba3128c890bc8f50a539d6b95fb21ef2a2ade65e4641d: Status 404 returned error can't find the container with id ba383ea77ec8d4565c5ba3128c890bc8f50a539d6b95fb21ef2a2ade65e4641d
Nov 26 08:32:17 crc kubenswrapper[4909]: I1126 08:32:17.011178 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 26 08:32:17 crc kubenswrapper[4909]: I1126 08:32:17.140035 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Nov 26 08:32:17 crc kubenswrapper[4909]: I1126 08:32:17.204566 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1ad3ebe0-0caa-449f-9980-0dbddd081302","Type":"ContainerStarted","Data":"ba383ea77ec8d4565c5ba3128c890bc8f50a539d6b95fb21ef2a2ade65e4641d"}
Nov 26 08:32:17 crc kubenswrapper[4909]: W1126 08:32:17.758815 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd44ac0aa_a634_4189_a500_b1ead88f40e0.slice/crio-3ab2ff3fcdb2d1d4b290db0c183d9bd292e9c4c215112747ab27936e705afd0b WatchSource:0}: Error finding container 3ab2ff3fcdb2d1d4b290db0c183d9bd292e9c4c215112747ab27936e705afd0b: Status 404 returned error can't find the container with id 3ab2ff3fcdb2d1d4b290db0c183d9bd292e9c4c215112747ab27936e705afd0b
Nov 26 08:32:17 crc kubenswrapper[4909]: I1126 08:32:17.760890 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Nov 26 08:32:18 crc kubenswrapper[4909]: I1126 08:32:18.235243 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1ad3ebe0-0caa-449f-9980-0dbddd081302","Type":"ContainerStarted","Data":"77bbb14d6744703415771e0eb5b5e223d87752afa1be33f5b0bb476cf42608ea"}
Nov 26 08:32:18 crc kubenswrapper[4909]: I1126 08:32:18.235982 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1ad3ebe0-0caa-449f-9980-0dbddd081302","Type":"ContainerStarted","Data":"466e9d8ac329961c1a73f232f6b866258b98f69e5e060d1fc4eee3db1d69533e"}
Nov 26 08:32:18 crc kubenswrapper[4909]: I1126 08:32:18.237122 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d44ac0aa-a634-4189-a500-b1ead88f40e0","Type":"ContainerStarted","Data":"3ab2ff3fcdb2d1d4b290db0c183d9bd292e9c4c215112747ab27936e705afd0b"}
Nov 26 08:32:18 crc kubenswrapper[4909]: I1126 08:32:18.266003 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=1.553223934 podStartE2EDuration="2.265979855s" podCreationTimestamp="2025-11-26 08:32:16 +0000 UTC" firstStartedPulling="2025-11-26 08:32:17.010969377 +0000 UTC m=+5509.157180543" lastFinishedPulling="2025-11-26 08:32:17.723725298 +0000 UTC m=+5509.869936464" observedRunningTime="2025-11-26 08:32:18.259012934 +0000 UTC m=+5510.405224090" watchObservedRunningTime="2025-11-26 08:32:18.265979855 +0000 UTC m=+5510.412191021"
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.196024 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.201458 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.250471 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d44ac0aa-a634-4189-a500-b1ead88f40e0","Type":"ContainerStarted","Data":"5b1eaf59034338762ea1c4df9e8bf6a6d3df65b31744909fbe083167240380c3"}
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.250513 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d44ac0aa-a634-4189-a500-b1ead88f40e0","Type":"ContainerStarted","Data":"7c6c20677c43f0261d6efde635bc89645e2327020b6c673e9dc13dd06e17d67e"}
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.252900 4909 generic.go:334] "Generic (PLEG): container finished" podID="72dabde5-9582-45ca-ab0d-fa93aa5f28bd" containerID="12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05" exitCode=0
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.252953 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.252950 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72dabde5-9582-45ca-ab0d-fa93aa5f28bd","Type":"ContainerDied","Data":"12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05"}
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.252999 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72dabde5-9582-45ca-ab0d-fa93aa5f28bd","Type":"ContainerDied","Data":"58edea080008f9b80ebaf32277d5822022b2ef2bdf680c1f6dfa9b145e9080d2"}
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.253018 4909 scope.go:117] "RemoveContainer" containerID="12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05"
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.272786 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.517585204 podStartE2EDuration="3.272769545s" podCreationTimestamp="2025-11-26 08:32:16 +0000 UTC" firstStartedPulling="2025-11-26 08:32:17.769767365 +0000 UTC m=+5509.915978541" lastFinishedPulling="2025-11-26 08:32:18.524951716 +0000 UTC m=+5510.671162882" observedRunningTime="2025-11-26 08:32:19.271263003 +0000 UTC m=+5511.417474169" watchObservedRunningTime="2025-11-26 08:32:19.272769545 +0000 UTC m=+5511.418980711"
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.286853 4909 scope.go:117] "RemoveContainer" containerID="57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5"
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.316005 4909 scope.go:117] "RemoveContainer" containerID="12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05"
Nov 26 08:32:19 crc kubenswrapper[4909]: E1126 08:32:19.316445 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05\": container with ID starting with 12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05 not found: ID does not exist" containerID="12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05"
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.316487 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05"} err="failed to get container status \"12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05\": rpc error: code = NotFound desc = could not find container \"12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05\": container with ID starting with 12e3f0178f4a604513c923c9f651ebf2905075b9a314460bb285163b6b067d05 not found: ID does not exist"
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.316514 4909 scope.go:117] "RemoveContainer" containerID="57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5"
Nov 26 08:32:19 crc kubenswrapper[4909]: E1126 08:32:19.316753 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5\": container with ID starting with 57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5 not found: ID does not exist" containerID="57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5"
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.316772 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5"} err="failed to get container status \"57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5\": rpc error: code = NotFound desc = could not find container \"57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5\": container with ID starting with 57b0c3bfe797159def71bbdf67ac21038fc177f5e2619d373e960218ba1a1de5 not found: ID does not exist"
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.320353 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-scripts\") pod \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") "
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.320429 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-combined-ca-bundle\") pod \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") "
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.320500 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w2p4\" (UniqueName: \"kubernetes.io/projected/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-kube-api-access-7w2p4\") pod \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") "
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.320560 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data\") pod \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") "
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.320612 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-etc-machine-id\") pod \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") "
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.320724 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-logs\") pod \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") "
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.320763 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data-custom\") pod \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\" (UID: \"72dabde5-9582-45ca-ab0d-fa93aa5f28bd\") "
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.325184 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "72dabde5-9582-45ca-ab0d-fa93aa5f28bd" (UID: "72dabde5-9582-45ca-ab0d-fa93aa5f28bd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.325552 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-logs" (OuterVolumeSpecName: "logs") pod "72dabde5-9582-45ca-ab0d-fa93aa5f28bd" (UID: "72dabde5-9582-45ca-ab0d-fa93aa5f28bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.328404 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "72dabde5-9582-45ca-ab0d-fa93aa5f28bd" (UID: "72dabde5-9582-45ca-ab0d-fa93aa5f28bd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.328458 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-scripts" (OuterVolumeSpecName: "scripts") pod "72dabde5-9582-45ca-ab0d-fa93aa5f28bd" (UID: "72dabde5-9582-45ca-ab0d-fa93aa5f28bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.328742 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-kube-api-access-7w2p4" (OuterVolumeSpecName: "kube-api-access-7w2p4") pod "72dabde5-9582-45ca-ab0d-fa93aa5f28bd" (UID: "72dabde5-9582-45ca-ab0d-fa93aa5f28bd"). InnerVolumeSpecName "kube-api-access-7w2p4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.359994 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72dabde5-9582-45ca-ab0d-fa93aa5f28bd" (UID: "72dabde5-9582-45ca-ab0d-fa93aa5f28bd"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.411546 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data" (OuterVolumeSpecName: "config-data") pod "72dabde5-9582-45ca-ab0d-fa93aa5f28bd" (UID: "72dabde5-9582-45ca-ab0d-fa93aa5f28bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.423346 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.423374 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.423385 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.423396 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.423405 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w2p4\" (UniqueName: \"kubernetes.io/projected/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-kube-api-access-7w2p4\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.423413 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.423422 4909 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72dabde5-9582-45ca-ab0d-fa93aa5f28bd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.583284 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.597928 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.607527 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 26 08:32:19 crc kubenswrapper[4909]: E1126 08:32:19.607991 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dabde5-9582-45ca-ab0d-fa93aa5f28bd" containerName="cinder-api" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.608014 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dabde5-9582-45ca-ab0d-fa93aa5f28bd" containerName="cinder-api" Nov 26 08:32:19 crc kubenswrapper[4909]: E1126 08:32:19.608053 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dabde5-9582-45ca-ab0d-fa93aa5f28bd" containerName="cinder-api-log" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.608061 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dabde5-9582-45ca-ab0d-fa93aa5f28bd" 
containerName="cinder-api-log" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.608297 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="72dabde5-9582-45ca-ab0d-fa93aa5f28bd" containerName="cinder-api" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.608330 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="72dabde5-9582-45ca-ab0d-fa93aa5f28bd" containerName="cinder-api-log" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.609529 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.615173 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.620794 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.729207 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vhp\" (UniqueName: \"kubernetes.io/projected/2f91a650-01b7-47d1-9410-a47b9408c634-kube-api-access-b7vhp\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.729266 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.729375 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-config-data\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.729440 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-scripts\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.729495 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f91a650-01b7-47d1-9410-a47b9408c634-logs\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.729514 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f91a650-01b7-47d1-9410-a47b9408c634-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.729532 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc 
kubenswrapper[4909]: I1126 08:32:19.831240 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-scripts\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.831670 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f91a650-01b7-47d1-9410-a47b9408c634-logs\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.831700 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f91a650-01b7-47d1-9410-a47b9408c634-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.831723 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.831857 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f91a650-01b7-47d1-9410-a47b9408c634-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.831963 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7vhp\" (UniqueName: \"kubernetes.io/projected/2f91a650-01b7-47d1-9410-a47b9408c634-kube-api-access-b7vhp\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.832006 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.832114 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-config-data\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.833304 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f91a650-01b7-47d1-9410-a47b9408c634-logs\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.837196 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.841525 
4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-scripts\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.842013 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-config-data\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.847733 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f91a650-01b7-47d1-9410-a47b9408c634-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.855632 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7vhp\" (UniqueName: \"kubernetes.io/projected/2f91a650-01b7-47d1-9410-a47b9408c634-kube-api-access-b7vhp\") pod \"cinder-api-0\" (UID: \"2f91a650-01b7-47d1-9410-a47b9408c634\") " pod="openstack/cinder-api-0" Nov 26 08:32:19 crc kubenswrapper[4909]: I1126 08:32:19.929515 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 26 08:32:20 crc kubenswrapper[4909]: I1126 08:32:20.362102 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 26 08:32:20 crc kubenswrapper[4909]: I1126 08:32:20.366560 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 26 08:32:20 crc kubenswrapper[4909]: I1126 08:32:20.378744 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 26 08:32:20 crc kubenswrapper[4909]: I1126 08:32:20.393245 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 26 08:32:20 crc kubenswrapper[4909]: I1126 08:32:20.393648 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 26 08:32:20 crc kubenswrapper[4909]: I1126 08:32:20.394578 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 26 08:32:20 crc kubenswrapper[4909]: I1126 08:32:20.398037 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 26 08:32:20 crc kubenswrapper[4909]: I1126 08:32:20.432943 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 26 08:32:20 crc kubenswrapper[4909]: I1126 08:32:20.512500 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72dabde5-9582-45ca-ab0d-fa93aa5f28bd" path="/var/lib/kubelet/pods/72dabde5-9582-45ca-ab0d-fa93aa5f28bd/volumes" Nov 26 08:32:21 crc kubenswrapper[4909]: I1126 08:32:21.284847 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f91a650-01b7-47d1-9410-a47b9408c634","Type":"ContainerStarted","Data":"c1fe722b91b2b6b48cf69b8e3bf4f62ba1ece7af935f5353b27d7ea86b1633e4"} Nov 26 08:32:21 crc kubenswrapper[4909]: I1126 08:32:21.285129 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"2f91a650-01b7-47d1-9410-a47b9408c634","Type":"ContainerStarted","Data":"56d5bb24dc44438ee418ece3533cf54fe3d607e1f9506fc57c2b95b78790ac32"} Nov 26 08:32:21 crc kubenswrapper[4909]: I1126 08:32:21.285825 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 26 08:32:21 crc kubenswrapper[4909]: I1126 08:32:21.286962 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 26 08:32:21 crc kubenswrapper[4909]: I1126 08:32:21.291083 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 26 08:32:21 crc kubenswrapper[4909]: I1126 08:32:21.474355 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:22 crc kubenswrapper[4909]: I1126 08:32:22.140935 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Nov 26 08:32:22 crc kubenswrapper[4909]: I1126 08:32:22.294653 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f91a650-01b7-47d1-9410-a47b9408c634","Type":"ContainerStarted","Data":"9618a51e9249edff44d98ff1f81e8d25715d79a2edfcbe1b46cbd0906be53581"} Nov 26 08:32:22 crc kubenswrapper[4909]: I1126 08:32:22.313910 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.313893802 podStartE2EDuration="3.313893802s" podCreationTimestamp="2025-11-26 08:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:32:22.311450815 +0000 UTC m=+5514.457661981" watchObservedRunningTime="2025-11-26 08:32:22.313893802 +0000 UTC m=+5514.460104968" Nov 26 08:32:23 crc kubenswrapper[4909]: I1126 08:32:23.306491 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 26 08:32:24 crc kubenswrapper[4909]: I1126 08:32:24.403522 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 26 08:32:24 crc kubenswrapper[4909]: I1126 08:32:24.460596 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 08:32:25 crc kubenswrapper[4909]: I1126 08:32:25.325269 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" containerName="cinder-scheduler" containerID="cri-o://82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41" gracePeriod=30 Nov 26 08:32:25 crc kubenswrapper[4909]: I1126 08:32:25.325870 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" containerName="probe" containerID="cri-o://83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b" gracePeriod=30 Nov 26 08:32:26 crc kubenswrapper[4909]: I1126 08:32:26.344401 4909 generic.go:334] "Generic (PLEG): container finished" podID="ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" containerID="83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b" exitCode=0 Nov 26 08:32:26 crc kubenswrapper[4909]: I1126 08:32:26.344523 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7","Type":"ContainerDied","Data":"83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b"} Nov 26 08:32:26 crc kubenswrapper[4909]: I1126 08:32:26.677990 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.229659 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.355970 4909 generic.go:334] "Generic (PLEG): container finished" podID="ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" containerID="82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41" exitCode=0 Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.356018 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7","Type":"ContainerDied","Data":"82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41"} Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.356047 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7","Type":"ContainerDied","Data":"1f31bffddef5ed5ff9416f6093cafd3f4d85e332814fb8aa6aa887a84c45e017"} Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.356068 4909 scope.go:117] "RemoveContainer" containerID="83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.356156 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.387302 4909 scope.go:117] "RemoveContainer" containerID="82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.403240 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-etc-machine-id\") pod \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.403403 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-scripts\") pod \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.403529 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-combined-ca-bundle\") pod \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.403625 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data\") pod \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.403333 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-etc-machine-id" (OuterVolumeSpecName: 
"etc-machine-id") pod "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" (UID: "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.403797 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data-custom\") pod \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.404041 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7tnx\" (UniqueName: \"kubernetes.io/projected/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-kube-api-access-m7tnx\") pod \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\" (UID: \"ad6aaa8b-64ca-4882-b721-cb6d6a691fb7\") " Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.404477 4909 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.412962 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.414454 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-kube-api-access-m7tnx" (OuterVolumeSpecName: "kube-api-access-m7tnx") pod "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" (UID: "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7"). InnerVolumeSpecName "kube-api-access-m7tnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.415490 4909 scope.go:117] "RemoveContainer" containerID="83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b" Nov 26 08:32:27 crc kubenswrapper[4909]: E1126 08:32:27.416188 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b\": container with ID starting with 83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b not found: ID does not exist" containerID="83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.416229 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b"} err="failed to get container status \"83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b\": rpc error: code = NotFound desc = could not find container \"83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b\": container with ID starting with 83d9a211d1a677da48fe770ed0ae92db57fb6ca2b207ca8646248e00d048839b not found: ID does not exist" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.416251 4909 scope.go:117] "RemoveContainer" containerID="82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41" Nov 26 08:32:27 crc kubenswrapper[4909]: E1126 08:32:27.416576 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41\": container with ID starting with 
82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41 not found: ID does not exist" containerID="82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.416733 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41"} err="failed to get container status \"82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41\": rpc error: code = NotFound desc = could not find container \"82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41\": container with ID starting with 82517018d766597c2b123149ff7898804dfcd47fa161d46fb149a269fa3f3f41 not found: ID does not exist" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.418951 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-scripts" (OuterVolumeSpecName: "scripts") pod "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" (UID: "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.426943 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" (UID: "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.489822 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" (UID: "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.507160 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7tnx\" (UniqueName: \"kubernetes.io/projected/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-kube-api-access-m7tnx\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.507187 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.507196 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.507204 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.544581 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data" (OuterVolumeSpecName: "config-data") pod "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" (UID: "ad6aaa8b-64ca-4882-b721-cb6d6a691fb7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.609341 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.697813 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.706349 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.725445 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 08:32:27 crc kubenswrapper[4909]: E1126 08:32:27.726307 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" containerName="cinder-scheduler" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.726411 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" containerName="cinder-scheduler" Nov 26 08:32:27 crc kubenswrapper[4909]: E1126 08:32:27.726505 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" containerName="probe" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.726620 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" containerName="probe" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.726993 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" containerName="cinder-scheduler" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.727107 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" containerName="probe" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.728543 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.734457 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.738794 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.812576 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.812940 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.813153 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.813343 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-scripts\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.813559 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-config-data\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.813815 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srj6h\" (UniqueName: \"kubernetes.io/projected/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-kube-api-access-srj6h\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.915664 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srj6h\" (UniqueName: \"kubernetes.io/projected/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-kube-api-access-srj6h\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.916155 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.916226 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.916294 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.916303 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.916404 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-scripts\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.916468 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-config-data\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.921212 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.921553 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.922755 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-scripts\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.924942 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-config-data\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " pod="openstack/cinder-scheduler-0" Nov 26 08:32:27 crc kubenswrapper[4909]: I1126 08:32:27.934158 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srj6h\" (UniqueName: \"kubernetes.io/projected/2fc86a6a-a136-4600-b8b4-bf7f4baa45a8-kube-api-access-srj6h\") pod \"cinder-scheduler-0\" (UID: \"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8\") " 
pod="openstack/cinder-scheduler-0" Nov 26 08:32:28 crc kubenswrapper[4909]: I1126 08:32:28.059355 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 08:32:28 crc kubenswrapper[4909]: I1126 08:32:28.514679 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad6aaa8b-64ca-4882-b721-cb6d6a691fb7" path="/var/lib/kubelet/pods/ad6aaa8b-64ca-4882-b721-cb6d6a691fb7/volumes" Nov 26 08:32:28 crc kubenswrapper[4909]: I1126 08:32:28.685521 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 08:32:28 crc kubenswrapper[4909]: W1126 08:32:28.686219 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fc86a6a_a136_4600_b8b4_bf7f4baa45a8.slice/crio-89051d614a53253966132f55cbcb89b49cc9d1064674bb3995d7e66c55bdb90d WatchSource:0}: Error finding container 89051d614a53253966132f55cbcb89b49cc9d1064674bb3995d7e66c55bdb90d: Status 404 returned error can't find the container with id 89051d614a53253966132f55cbcb89b49cc9d1064674bb3995d7e66c55bdb90d Nov 26 08:32:29 crc kubenswrapper[4909]: I1126 08:32:29.384580 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8","Type":"ContainerStarted","Data":"e182c707b1a77dbe419ac2763fa7994aa9dc3452070d369db6a3d7ed73f1fcf1"} Nov 26 08:32:29 crc kubenswrapper[4909]: I1126 08:32:29.384826 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8","Type":"ContainerStarted","Data":"89051d614a53253966132f55cbcb89b49cc9d1064674bb3995d7e66c55bdb90d"} Nov 26 08:32:30 crc kubenswrapper[4909]: I1126 08:32:30.399617 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2fc86a6a-a136-4600-b8b4-bf7f4baa45a8","Type":"ContainerStarted","Data":"e51e85ca14c9bedd6e2e04f34b1813151df10fdeb80e5af1567bccba59962514"} Nov 26 08:32:30 crc kubenswrapper[4909]: I1126 08:32:30.421247 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.421228951 podStartE2EDuration="3.421228951s" podCreationTimestamp="2025-11-26 08:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:32:30.418541128 +0000 UTC m=+5522.564752294" watchObservedRunningTime="2025-11-26 08:32:30.421228951 +0000 UTC m=+5522.567440117" Nov 26 08:32:31 crc kubenswrapper[4909]: I1126 08:32:31.813058 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 26 08:32:33 crc kubenswrapper[4909]: I1126 08:32:33.059546 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 26 08:32:38 crc kubenswrapper[4909]: I1126 08:32:38.334188 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 26 08:33:06 crc kubenswrapper[4909]: I1126 08:33:06.084106 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8vbph"] Nov 26 08:33:06 crc kubenswrapper[4909]: I1126 08:33:06.102743 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8vbph"] Nov 26 08:33:06 crc kubenswrapper[4909]: I1126 08:33:06.512431 4909 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="fd240cfb-fc3a-4822-8155-475932071966" path="/var/lib/kubelet/pods/fd240cfb-fc3a-4822-8155-475932071966/volumes" Nov 26 08:33:17 crc kubenswrapper[4909]: I1126 08:33:17.035427 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e378-account-create-lc99q"] Nov 26 08:33:17 crc kubenswrapper[4909]: I1126 08:33:17.044321 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e378-account-create-lc99q"] Nov 26 08:33:18 crc kubenswrapper[4909]: I1126 08:33:18.509480 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44da67f6-1772-47c0-9bbd-d2b793f0a84e" path="/var/lib/kubelet/pods/44da67f6-1772-47c0-9bbd-d2b793f0a84e/volumes" Nov 26 08:33:23 crc kubenswrapper[4909]: I1126 08:33:23.080018 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-hsxh2"] Nov 26 08:33:23 crc kubenswrapper[4909]: I1126 08:33:23.098485 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-hsxh2"] Nov 26 08:33:24 crc kubenswrapper[4909]: I1126 08:33:24.514112 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74022312-19ed-4ed7-b5d7-03a842e1de8e" path="/var/lib/kubelet/pods/74022312-19ed-4ed7-b5d7-03a842e1de8e/volumes" Nov 26 08:33:37 crc kubenswrapper[4909]: I1126 08:33:37.045821 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hr7tj"] Nov 26 08:33:37 crc kubenswrapper[4909]: I1126 08:33:37.054809 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hr7tj"] Nov 26 08:33:37 crc kubenswrapper[4909]: I1126 08:33:37.300954 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:33:37 crc kubenswrapper[4909]: I1126 08:33:37.301053 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:33:38 crc kubenswrapper[4909]: I1126 08:33:38.513940 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e63ca5-fc22-4174-b9d3-bb47fa838467" path="/var/lib/kubelet/pods/d0e63ca5-fc22-4174-b9d3-bb47fa838467/volumes" Nov 26 08:33:38 crc kubenswrapper[4909]: I1126 08:33:38.749039 4909 scope.go:117] "RemoveContainer" containerID="8794fb03db07a049ffa784e3a45079b185a38d397cdd470d942bbb017487b203" Nov 26 08:33:38 crc kubenswrapper[4909]: I1126 08:33:38.794314 4909 scope.go:117] "RemoveContainer" containerID="67b87a264811d9c1603b864be639c0472010eae91263b0459ac1761d283ebeb6" Nov 26 08:33:38 crc kubenswrapper[4909]: I1126 08:33:38.824327 4909 scope.go:117] "RemoveContainer" containerID="4e65a48844d9443182612718e74b6adb9e165066ec8f7ab159ee9c091f238d94" Nov 26 08:33:38 crc kubenswrapper[4909]: I1126 08:33:38.857062 4909 scope.go:117] "RemoveContainer" containerID="1b07b68e8c6b5e4b739af19646ff3e390df5ac14b6618e916ee4799bd9f9de29" Nov 26 08:34:07 crc kubenswrapper[4909]: I1126 08:34:07.301557 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:34:07 crc kubenswrapper[4909]: I1126 08:34:07.302089 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.481403 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8jkx8"] Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.483665 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.486552 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.486787 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-88jfp" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.492272 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1af0814b-2284-43ed-b8bc-91736abd63ac-var-run\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.492573 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1af0814b-2284-43ed-b8bc-91736abd63ac-var-run-ovn\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.492713 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhxqc\" (UniqueName: \"kubernetes.io/projected/1af0814b-2284-43ed-b8bc-91736abd63ac-kube-api-access-zhxqc\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.492860 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1af0814b-2284-43ed-b8bc-91736abd63ac-var-log-ovn\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.492978 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1af0814b-2284-43ed-b8bc-91736abd63ac-scripts\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.540982 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-79cx4"] Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.543122 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jkx8"] Nov 26 08:34:18 crc 
kubenswrapper[4909]: I1126 08:34:18.543251 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.552861 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-79cx4"] Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.601121 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1af0814b-2284-43ed-b8bc-91736abd63ac-var-log-ovn\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.601234 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1af0814b-2284-43ed-b8bc-91736abd63ac-scripts\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.601346 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-var-run\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.601383 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-var-log\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.601416 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b9vw\" (UniqueName: \"kubernetes.io/projected/88c8deef-c53d-48d5-8716-4614abbd88e0-kube-api-access-5b9vw\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.601502 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-var-lib\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.601629 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1af0814b-2284-43ed-b8bc-91736abd63ac-var-run\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.601705 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1af0814b-2284-43ed-b8bc-91736abd63ac-var-run-ovn\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.601769 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-etc-ovs\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.601798 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhxqc\" (UniqueName: \"kubernetes.io/projected/1af0814b-2284-43ed-b8bc-91736abd63ac-kube-api-access-zhxqc\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.601848 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88c8deef-c53d-48d5-8716-4614abbd88e0-scripts\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.602312 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1af0814b-2284-43ed-b8bc-91736abd63ac-var-log-ovn\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.604291 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1af0814b-2284-43ed-b8bc-91736abd63ac-scripts\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.604387 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1af0814b-2284-43ed-b8bc-91736abd63ac-var-run\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.604425 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1af0814b-2284-43ed-b8bc-91736abd63ac-var-run-ovn\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.623877 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhxqc\" (UniqueName: \"kubernetes.io/projected/1af0814b-2284-43ed-b8bc-91736abd63ac-kube-api-access-zhxqc\") pod \"ovn-controller-8jkx8\" (UID: \"1af0814b-2284-43ed-b8bc-91736abd63ac\") " pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.703248 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-var-run\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.703322 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-var-run\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 
08:34:18.703352 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-var-log\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.703387 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b9vw\" (UniqueName: \"kubernetes.io/projected/88c8deef-c53d-48d5-8716-4614abbd88e0-kube-api-access-5b9vw\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.703481 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-var-lib\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.703498 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-var-log\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.703673 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-var-lib\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.703682 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-etc-ovs\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.703730 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/88c8deef-c53d-48d5-8716-4614abbd88e0-etc-ovs\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.703757 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88c8deef-c53d-48d5-8716-4614abbd88e0-scripts\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.705680 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88c8deef-c53d-48d5-8716-4614abbd88e0-scripts\") pod \"ovn-controller-ovs-79cx4\" (UID: \"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.720368 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b9vw\" (UniqueName: \"kubernetes.io/projected/88c8deef-c53d-48d5-8716-4614abbd88e0-kube-api-access-5b9vw\") pod \"ovn-controller-ovs-79cx4\" (UID: 
\"88c8deef-c53d-48d5-8716-4614abbd88e0\") " pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.805723 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:18 crc kubenswrapper[4909]: I1126 08:34:18.858492 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:19 crc kubenswrapper[4909]: I1126 08:34:19.154165 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jkx8"] Nov 26 08:34:19 crc kubenswrapper[4909]: W1126 08:34:19.165825 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1af0814b_2284_43ed_b8bc_91736abd63ac.slice/crio-701e34de0c6e2245f84d78f026d0fa62259249a2e842fce6a04baacd147cbac5 WatchSource:0}: Error finding container 701e34de0c6e2245f84d78f026d0fa62259249a2e842fce6a04baacd147cbac5: Status 404 returned error can't find the container with id 701e34de0c6e2245f84d78f026d0fa62259249a2e842fce6a04baacd147cbac5 Nov 26 08:34:19 crc kubenswrapper[4909]: I1126 08:34:19.568239 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-79cx4"] Nov 26 08:34:19 crc kubenswrapper[4909]: W1126 08:34:19.569660 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88c8deef_c53d_48d5_8716_4614abbd88e0.slice/crio-972abf77c37d6da594042c58eebe2e251c8018f5b93002ce03946744ec9031ff WatchSource:0}: Error finding container 972abf77c37d6da594042c58eebe2e251c8018f5b93002ce03946744ec9031ff: Status 404 returned error can't find the container with id 972abf77c37d6da594042c58eebe2e251c8018f5b93002ce03946744ec9031ff Nov 26 08:34:19 crc kubenswrapper[4909]: I1126 08:34:19.596739 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-79cx4" event={"ID":"88c8deef-c53d-48d5-8716-4614abbd88e0","Type":"ContainerStarted","Data":"972abf77c37d6da594042c58eebe2e251c8018f5b93002ce03946744ec9031ff"} Nov 26 08:34:19 crc kubenswrapper[4909]: I1126 08:34:19.598908 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jkx8" event={"ID":"1af0814b-2284-43ed-b8bc-91736abd63ac","Type":"ContainerStarted","Data":"fbdd232e13c9a7787968634fad69b21f94739a10d87026e6aa8f7be5ffbb4b76"} Nov 26 08:34:19 crc kubenswrapper[4909]: I1126 08:34:19.598935 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jkx8" event={"ID":"1af0814b-2284-43ed-b8bc-91736abd63ac","Type":"ContainerStarted","Data":"701e34de0c6e2245f84d78f026d0fa62259249a2e842fce6a04baacd147cbac5"} Nov 26 08:34:19 crc kubenswrapper[4909]: I1126 08:34:19.599877 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-8jkx8" Nov 26 08:34:19 crc kubenswrapper[4909]: I1126 08:34:19.639320 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8jkx8" podStartSLOduration=1.6393000949999998 podStartE2EDuration="1.639300095s" podCreationTimestamp="2025-11-26 08:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:34:19.617771857 +0000 UTC m=+5631.763983013" watchObservedRunningTime="2025-11-26 08:34:19.639300095 +0000 UTC m=+5631.785511261" Nov 26 08:34:19 crc kubenswrapper[4909]: I1126 
08:34:19.943400 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-tzvzv"] Nov 26 08:34:19 crc kubenswrapper[4909]: I1126 08:34:19.944967 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:19 crc kubenswrapper[4909]: I1126 08:34:19.947924 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 26 08:34:19 crc kubenswrapper[4909]: I1126 08:34:19.959683 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tzvzv"] Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.033795 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5tt\" (UniqueName: \"kubernetes.io/projected/c4bbdf2b-e4c8-4453-b471-c11f1421d401-kube-api-access-nc5tt\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.033854 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4bbdf2b-e4c8-4453-b471-c11f1421d401-config\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.033924 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c4bbdf2b-e4c8-4453-b471-c11f1421d401-ovn-rundir\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.034551 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c4bbdf2b-e4c8-4453-b471-c11f1421d401-ovs-rundir\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.136140 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c4bbdf2b-e4c8-4453-b471-c11f1421d401-ovs-rundir\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.136404 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc5tt\" (UniqueName: \"kubernetes.io/projected/c4bbdf2b-e4c8-4453-b471-c11f1421d401-kube-api-access-nc5tt\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.136430 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4bbdf2b-e4c8-4453-b471-c11f1421d401-config\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.136484 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c4bbdf2b-e4c8-4453-b471-c11f1421d401-ovn-rundir\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.136621 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c4bbdf2b-e4c8-4453-b471-c11f1421d401-ovs-rundir\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.136646 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c4bbdf2b-e4c8-4453-b471-c11f1421d401-ovn-rundir\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.137545 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4bbdf2b-e4c8-4453-b471-c11f1421d401-config\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.155686 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc5tt\" (UniqueName: \"kubernetes.io/projected/c4bbdf2b-e4c8-4453-b471-c11f1421d401-kube-api-access-nc5tt\") pod \"ovn-controller-metrics-tzvzv\" (UID: \"c4bbdf2b-e4c8-4453-b471-c11f1421d401\") " pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.263971 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tzvzv" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.452470 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-5hhlc"] Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.453960 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-5hhlc" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.479130 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-5hhlc"] Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.543260 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjjgp\" (UniqueName: \"kubernetes.io/projected/7e0cc19d-c7f8-467c-ad24-8217faef3b6f-kube-api-access-pjjgp\") pod \"octavia-db-create-5hhlc\" (UID: \"7e0cc19d-c7f8-467c-ad24-8217faef3b6f\") " pod="openstack/octavia-db-create-5hhlc" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.614757 4909 generic.go:334] "Generic (PLEG): container finished" podID="88c8deef-c53d-48d5-8716-4614abbd88e0" containerID="c9af8835f901eac926e54adddc52014c03dfbfde3fd767150e664bca1321514a" exitCode=0 Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.614846 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-79cx4" event={"ID":"88c8deef-c53d-48d5-8716-4614abbd88e0","Type":"ContainerDied","Data":"c9af8835f901eac926e54adddc52014c03dfbfde3fd767150e664bca1321514a"} Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.645513 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjjgp\" (UniqueName: \"kubernetes.io/projected/7e0cc19d-c7f8-467c-ad24-8217faef3b6f-kube-api-access-pjjgp\") pod \"octavia-db-create-5hhlc\" (UID: \"7e0cc19d-c7f8-467c-ad24-8217faef3b6f\") " pod="openstack/octavia-db-create-5hhlc" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.663583 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjjgp\" (UniqueName: \"kubernetes.io/projected/7e0cc19d-c7f8-467c-ad24-8217faef3b6f-kube-api-access-pjjgp\") pod \"octavia-db-create-5hhlc\" (UID: \"7e0cc19d-c7f8-467c-ad24-8217faef3b6f\") " pod="openstack/octavia-db-create-5hhlc" Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.757414 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tzvzv"] Nov 26 08:34:20 crc kubenswrapper[4909]: I1126 08:34:20.801730 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-5hhlc" Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.249112 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-5hhlc"] Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.626292 4909 generic.go:334] "Generic (PLEG): container finished" podID="7e0cc19d-c7f8-467c-ad24-8217faef3b6f" containerID="65209f94e941abfd4b3f3daac56b2b6aca783e4301670e42d1d7336549a52d9c" exitCode=0 Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.626369 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5hhlc" event={"ID":"7e0cc19d-c7f8-467c-ad24-8217faef3b6f","Type":"ContainerDied","Data":"65209f94e941abfd4b3f3daac56b2b6aca783e4301670e42d1d7336549a52d9c"} Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.626621 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5hhlc" event={"ID":"7e0cc19d-c7f8-467c-ad24-8217faef3b6f","Type":"ContainerStarted","Data":"c29e9d080989b0513c3594b74e35137c029c61dd9b2c777a80a9e0fbc3e34925"} Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.629687 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-79cx4" event={"ID":"88c8deef-c53d-48d5-8716-4614abbd88e0","Type":"ContainerStarted","Data":"897bddc742577fe6724665361ea5a04bbbf012f4279dec41ccc61cf75f5a4980"} Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.629740 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-79cx4" event={"ID":"88c8deef-c53d-48d5-8716-4614abbd88e0","Type":"ContainerStarted","Data":"9940151763b97e902e298766b4689135d616da410ec5135cf22c34009dfd91b4"} Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.630234 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.630268 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.633175 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tzvzv" event={"ID":"c4bbdf2b-e4c8-4453-b471-c11f1421d401","Type":"ContainerStarted","Data":"a3349da09160795fe0528d8f121d22752ebfae3ee396b750a94c4339ad65ac1b"} Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.633231 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tzvzv" event={"ID":"c4bbdf2b-e4c8-4453-b471-c11f1421d401","Type":"ContainerStarted","Data":"c8ae92bee760cbaa43af25aa7a01b8ef682427af1dc9bfc1e820fbfb797f9111"} Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.680398 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-79cx4" podStartSLOduration=3.680380285 podStartE2EDuration="3.680380285s" podCreationTimestamp="2025-11-26 08:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:34:21.669825688 +0000 UTC m=+5633.816036884" watchObservedRunningTime="2025-11-26 08:34:21.680380285 +0000 UTC m=+5633.826591471" Nov 26 08:34:21 crc kubenswrapper[4909]: I1126 08:34:21.696988 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-tzvzv" podStartSLOduration=2.696967648 podStartE2EDuration="2.696967648s" podCreationTimestamp="2025-11-26 08:34:19 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:34:21.692033333 +0000 UTC m=+5633.838244519" watchObservedRunningTime="2025-11-26 08:34:21.696967648 +0000 UTC m=+5633.843178814" Nov 26 08:34:23 crc kubenswrapper[4909]: I1126 08:34:23.024055 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-5hhlc" Nov 26 08:34:23 crc kubenswrapper[4909]: I1126 08:34:23.103145 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjjgp\" (UniqueName: \"kubernetes.io/projected/7e0cc19d-c7f8-467c-ad24-8217faef3b6f-kube-api-access-pjjgp\") pod \"7e0cc19d-c7f8-467c-ad24-8217faef3b6f\" (UID: \"7e0cc19d-c7f8-467c-ad24-8217faef3b6f\") " Nov 26 08:34:23 crc kubenswrapper[4909]: I1126 08:34:23.108790 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0cc19d-c7f8-467c-ad24-8217faef3b6f-kube-api-access-pjjgp" (OuterVolumeSpecName: "kube-api-access-pjjgp") pod "7e0cc19d-c7f8-467c-ad24-8217faef3b6f" (UID: "7e0cc19d-c7f8-467c-ad24-8217faef3b6f"). InnerVolumeSpecName "kube-api-access-pjjgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:34:23 crc kubenswrapper[4909]: I1126 08:34:23.205847 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjjgp\" (UniqueName: \"kubernetes.io/projected/7e0cc19d-c7f8-467c-ad24-8217faef3b6f-kube-api-access-pjjgp\") on node \"crc\" DevicePath \"\"" Nov 26 08:34:23 crc kubenswrapper[4909]: I1126 08:34:23.655046 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5hhlc" event={"ID":"7e0cc19d-c7f8-467c-ad24-8217faef3b6f","Type":"ContainerDied","Data":"c29e9d080989b0513c3594b74e35137c029c61dd9b2c777a80a9e0fbc3e34925"} Nov 26 08:34:23 crc kubenswrapper[4909]: I1126 08:34:23.655090 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c29e9d080989b0513c3594b74e35137c029c61dd9b2c777a80a9e0fbc3e34925" Nov 26 08:34:23 crc kubenswrapper[4909]: I1126 08:34:23.655112 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-5hhlc" Nov 26 08:34:31 crc kubenswrapper[4909]: I1126 08:34:31.584232 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-d8df-account-create-vbm7m"] Nov 26 08:34:31 crc kubenswrapper[4909]: E1126 08:34:31.585258 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0cc19d-c7f8-467c-ad24-8217faef3b6f" containerName="mariadb-database-create" Nov 26 08:34:31 crc kubenswrapper[4909]: I1126 08:34:31.585277 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0cc19d-c7f8-467c-ad24-8217faef3b6f" containerName="mariadb-database-create" Nov 26 08:34:31 crc kubenswrapper[4909]: I1126 08:34:31.585551 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e0cc19d-c7f8-467c-ad24-8217faef3b6f" containerName="mariadb-database-create" Nov 26 08:34:31 crc kubenswrapper[4909]: I1126 08:34:31.586292 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-d8df-account-create-vbm7m" Nov 26 08:34:31 crc kubenswrapper[4909]: I1126 08:34:31.605901 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-d8df-account-create-vbm7m"] Nov 26 08:34:31 crc kubenswrapper[4909]: I1126 08:34:31.621209 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Nov 26 08:34:31 crc kubenswrapper[4909]: I1126 08:34:31.724728 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v94s6\" (UniqueName: \"kubernetes.io/projected/b2987232-00ac-4875-9064-2702269119f7-kube-api-access-v94s6\") pod \"octavia-d8df-account-create-vbm7m\" (UID: \"b2987232-00ac-4875-9064-2702269119f7\") " pod="openstack/octavia-d8df-account-create-vbm7m" Nov 26 08:34:31 crc kubenswrapper[4909]: I1126 08:34:31.826328 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v94s6\" (UniqueName: \"kubernetes.io/projected/b2987232-00ac-4875-9064-2702269119f7-kube-api-access-v94s6\") pod \"octavia-d8df-account-create-vbm7m\" (UID: \"b2987232-00ac-4875-9064-2702269119f7\") " pod="openstack/octavia-d8df-account-create-vbm7m" Nov 26 08:34:31 crc kubenswrapper[4909]: I1126 08:34:31.850923 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v94s6\" (UniqueName: \"kubernetes.io/projected/b2987232-00ac-4875-9064-2702269119f7-kube-api-access-v94s6\") pod \"octavia-d8df-account-create-vbm7m\" (UID: \"b2987232-00ac-4875-9064-2702269119f7\") " pod="openstack/octavia-d8df-account-create-vbm7m" Nov 26 08:34:31 crc kubenswrapper[4909]: I1126 08:34:31.937807 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-d8df-account-create-vbm7m" Nov 26 08:34:32 crc kubenswrapper[4909]: I1126 08:34:32.415009 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-d8df-account-create-vbm7m"] Nov 26 08:34:32 crc kubenswrapper[4909]: W1126 08:34:32.416993 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2987232_00ac_4875_9064_2702269119f7.slice/crio-18c181e90874efd528f65a0f985c295b4a32f71df47e96cefa2a97ea428931d5 WatchSource:0}: Error finding container 18c181e90874efd528f65a0f985c295b4a32f71df47e96cefa2a97ea428931d5: Status 404 returned error can't find the container with id 18c181e90874efd528f65a0f985c295b4a32f71df47e96cefa2a97ea428931d5 Nov 26 08:34:32 crc kubenswrapper[4909]: I1126 08:34:32.771396 4909 generic.go:334] "Generic (PLEG): container finished" podID="b2987232-00ac-4875-9064-2702269119f7" containerID="28b163d0e748eabfffc0e4ca848bfe51a2c3b38e6ade0ac1e77fbdf8fa34570b" exitCode=0 Nov 26 08:34:32 crc kubenswrapper[4909]: I1126 08:34:32.771449 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d8df-account-create-vbm7m" event={"ID":"b2987232-00ac-4875-9064-2702269119f7","Type":"ContainerDied","Data":"28b163d0e748eabfffc0e4ca848bfe51a2c3b38e6ade0ac1e77fbdf8fa34570b"} Nov 26 08:34:32 crc kubenswrapper[4909]: I1126 08:34:32.771481 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d8df-account-create-vbm7m" event={"ID":"b2987232-00ac-4875-9064-2702269119f7","Type":"ContainerStarted","Data":"18c181e90874efd528f65a0f985c295b4a32f71df47e96cefa2a97ea428931d5"} Nov 26 08:34:34 crc kubenswrapper[4909]: I1126 08:34:34.204760 4909 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/octavia-d8df-account-create-vbm7m" Nov 26 08:34:34 crc kubenswrapper[4909]: I1126 08:34:34.280406 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v94s6\" (UniqueName: \"kubernetes.io/projected/b2987232-00ac-4875-9064-2702269119f7-kube-api-access-v94s6\") pod \"b2987232-00ac-4875-9064-2702269119f7\" (UID: \"b2987232-00ac-4875-9064-2702269119f7\") " Nov 26 08:34:34 crc kubenswrapper[4909]: I1126 08:34:34.287189 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2987232-00ac-4875-9064-2702269119f7-kube-api-access-v94s6" (OuterVolumeSpecName: "kube-api-access-v94s6") pod "b2987232-00ac-4875-9064-2702269119f7" (UID: "b2987232-00ac-4875-9064-2702269119f7"). InnerVolumeSpecName "kube-api-access-v94s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:34:34 crc kubenswrapper[4909]: I1126 08:34:34.382206 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v94s6\" (UniqueName: \"kubernetes.io/projected/b2987232-00ac-4875-9064-2702269119f7-kube-api-access-v94s6\") on node \"crc\" DevicePath \"\"" Nov 26 08:34:34 crc kubenswrapper[4909]: I1126 08:34:34.803101 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d8df-account-create-vbm7m" event={"ID":"b2987232-00ac-4875-9064-2702269119f7","Type":"ContainerDied","Data":"18c181e90874efd528f65a0f985c295b4a32f71df47e96cefa2a97ea428931d5"} Nov 26 08:34:34 crc kubenswrapper[4909]: I1126 08:34:34.803180 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18c181e90874efd528f65a0f985c295b4a32f71df47e96cefa2a97ea428931d5" Nov 26 08:34:34 crc kubenswrapper[4909]: I1126 08:34:34.803210 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-d8df-account-create-vbm7m" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.301085 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.301399 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.301438 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.301920 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.301962 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" gracePeriod=600 Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.451778 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-dtc4m"] Nov 26 08:34:37 crc kubenswrapper[4909]: E1126 08:34:37.453088 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2987232-00ac-4875-9064-2702269119f7" containerName="mariadb-account-create" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.453315 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2987232-00ac-4875-9064-2702269119f7" containerName="mariadb-account-create" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.453983 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2987232-00ac-4875-9064-2702269119f7" containerName="mariadb-account-create" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.455636 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-dtc4m" Nov 26 08:34:37 crc kubenswrapper[4909]: E1126 08:34:37.456985 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.460521 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-dtc4m"] Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.553286 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpg9d\" (UniqueName: \"kubernetes.io/projected/96be849b-680f-4f74-9855-3194ef1b3969-kube-api-access-tpg9d\") pod \"octavia-persistence-db-create-dtc4m\" (UID: \"96be849b-680f-4f74-9855-3194ef1b3969\") " pod="openstack/octavia-persistence-db-create-dtc4m" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.656885 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpg9d\" (UniqueName: \"kubernetes.io/projected/96be849b-680f-4f74-9855-3194ef1b3969-kube-api-access-tpg9d\") pod \"octavia-persistence-db-create-dtc4m\" (UID: \"96be849b-680f-4f74-9855-3194ef1b3969\") " pod="openstack/octavia-persistence-db-create-dtc4m" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.701832 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpg9d\" (UniqueName: \"kubernetes.io/projected/96be849b-680f-4f74-9855-3194ef1b3969-kube-api-access-tpg9d\") pod \"octavia-persistence-db-create-dtc4m\" (UID: \"96be849b-680f-4f74-9855-3194ef1b3969\") " pod="openstack/octavia-persistence-db-create-dtc4m" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.800202 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-dtc4m" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.834929 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" exitCode=0 Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.834974 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed"} Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.835269 4909 scope.go:117] "RemoveContainer" containerID="e5a2ca2aa716ec654cdedcf8d1dd83703540811f543055e7744242b9dcfda8f7" Nov 26 08:34:37 crc kubenswrapper[4909]: I1126 08:34:37.836231 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:34:37 crc kubenswrapper[4909]: E1126 08:34:37.836703 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:34:38 crc kubenswrapper[4909]: I1126 08:34:38.295506 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-dtc4m"] Nov 26 08:34:38 crc kubenswrapper[4909]: W1126 08:34:38.299312 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96be849b_680f_4f74_9855_3194ef1b3969.slice/crio-ffbd60e53ec086924168f85487bfab462e4a0caf92dc285c7ed5978844b79b2d WatchSource:0}: Error finding container ffbd60e53ec086924168f85487bfab462e4a0caf92dc285c7ed5978844b79b2d: Status 404 returned error can't find the container with id ffbd60e53ec086924168f85487bfab462e4a0caf92dc285c7ed5978844b79b2d Nov 26 08:34:38 crc kubenswrapper[4909]: I1126 08:34:38.867994 4909 generic.go:334] "Generic (PLEG): container finished" podID="96be849b-680f-4f74-9855-3194ef1b3969" containerID="9fc7548e3a2d3ba71a5b6f31b0cdb4e60a3aa82b70ede327a986289af30a2887" exitCode=0 Nov 26 08:34:38 crc kubenswrapper[4909]: I1126 08:34:38.868089 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-dtc4m" event={"ID":"96be849b-680f-4f74-9855-3194ef1b3969","Type":"ContainerDied","Data":"9fc7548e3a2d3ba71a5b6f31b0cdb4e60a3aa82b70ede327a986289af30a2887"} Nov 26 08:34:38 crc kubenswrapper[4909]: I1126 08:34:38.868433 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-dtc4m" event={"ID":"96be849b-680f-4f74-9855-3194ef1b3969","Type":"ContainerStarted","Data":"ffbd60e53ec086924168f85487bfab462e4a0caf92dc285c7ed5978844b79b2d"} Nov 26 08:34:40 crc kubenswrapper[4909]: I1126 08:34:40.228514 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-dtc4m" Nov 26 08:34:40 crc kubenswrapper[4909]: I1126 08:34:40.306077 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpg9d\" (UniqueName: \"kubernetes.io/projected/96be849b-680f-4f74-9855-3194ef1b3969-kube-api-access-tpg9d\") pod \"96be849b-680f-4f74-9855-3194ef1b3969\" (UID: \"96be849b-680f-4f74-9855-3194ef1b3969\") " Nov 26 08:34:40 crc kubenswrapper[4909]: I1126 08:34:40.311746 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96be849b-680f-4f74-9855-3194ef1b3969-kube-api-access-tpg9d" (OuterVolumeSpecName: "kube-api-access-tpg9d") pod "96be849b-680f-4f74-9855-3194ef1b3969" (UID: "96be849b-680f-4f74-9855-3194ef1b3969"). InnerVolumeSpecName "kube-api-access-tpg9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:34:40 crc kubenswrapper[4909]: I1126 08:34:40.408882 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpg9d\" (UniqueName: \"kubernetes.io/projected/96be849b-680f-4f74-9855-3194ef1b3969-kube-api-access-tpg9d\") on node \"crc\" DevicePath \"\"" Nov 26 08:34:40 crc kubenswrapper[4909]: I1126 08:34:40.897652 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-dtc4m" event={"ID":"96be849b-680f-4f74-9855-3194ef1b3969","Type":"ContainerDied","Data":"ffbd60e53ec086924168f85487bfab462e4a0caf92dc285c7ed5978844b79b2d"} Nov 26 08:34:40 crc kubenswrapper[4909]: I1126 08:34:40.897720 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffbd60e53ec086924168f85487bfab462e4a0caf92dc285c7ed5978844b79b2d" Nov 26 08:34:40 crc kubenswrapper[4909]: I1126 08:34:40.897730 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-dtc4m" Nov 26 08:34:47 crc kubenswrapper[4909]: I1126 08:34:47.982769 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-5358-account-create-rljm2"] Nov 26 08:34:47 crc kubenswrapper[4909]: E1126 08:34:47.985121 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96be849b-680f-4f74-9855-3194ef1b3969" containerName="mariadb-database-create" Nov 26 08:34:47 crc kubenswrapper[4909]: I1126 08:34:47.985159 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="96be849b-680f-4f74-9855-3194ef1b3969" containerName="mariadb-database-create" Nov 26 08:34:47 crc kubenswrapper[4909]: I1126 08:34:47.985358 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="96be849b-680f-4f74-9855-3194ef1b3969" containerName="mariadb-database-create" Nov 26 08:34:47 crc kubenswrapper[4909]: I1126 08:34:47.986107 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-5358-account-create-rljm2" Nov 26 08:34:47 crc kubenswrapper[4909]: I1126 08:34:47.991798 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Nov 26 08:34:48 crc kubenswrapper[4909]: I1126 08:34:48.008455 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-5358-account-create-rljm2"] Nov 26 08:34:48 crc kubenswrapper[4909]: I1126 08:34:48.090119 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77s6b\" (UniqueName: \"kubernetes.io/projected/d12d5303-4df7-444c-a843-699aacd819b8-kube-api-access-77s6b\") pod \"octavia-5358-account-create-rljm2\" (UID: \"d12d5303-4df7-444c-a843-699aacd819b8\") " pod="openstack/octavia-5358-account-create-rljm2" Nov 26 08:34:48 crc kubenswrapper[4909]: I1126 08:34:48.192890 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77s6b\" (UniqueName: \"kubernetes.io/projected/d12d5303-4df7-444c-a843-699aacd819b8-kube-api-access-77s6b\") pod \"octavia-5358-account-create-rljm2\" (UID: \"d12d5303-4df7-444c-a843-699aacd819b8\") " pod="openstack/octavia-5358-account-create-rljm2" Nov 26 08:34:48 crc kubenswrapper[4909]: I1126 08:34:48.218757 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77s6b\" (UniqueName: \"kubernetes.io/projected/d12d5303-4df7-444c-a843-699aacd819b8-kube-api-access-77s6b\") pod \"octavia-5358-account-create-rljm2\" (UID: \"d12d5303-4df7-444c-a843-699aacd819b8\") " pod="openstack/octavia-5358-account-create-rljm2" Nov 26 08:34:48 crc kubenswrapper[4909]: I1126 08:34:48.313476 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-5358-account-create-rljm2" Nov 26 08:34:48 crc kubenswrapper[4909]: I1126 08:34:48.834009 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-5358-account-create-rljm2"] Nov 26 08:34:48 crc kubenswrapper[4909]: I1126 08:34:48.991917 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-5358-account-create-rljm2" event={"ID":"d12d5303-4df7-444c-a843-699aacd819b8","Type":"ContainerStarted","Data":"1ca85b16e63192106dc55ddc42808d1071efad4f749feefbf10fd8e4e267a3fb"} Nov 26 08:34:50 crc kubenswrapper[4909]: I1126 08:34:50.003571 4909 generic.go:334] "Generic (PLEG): container finished" podID="d12d5303-4df7-444c-a843-699aacd819b8" containerID="4fe18fa3f527cffb7a86c2dd69516ea7fb496b60b13e1f884c03dfce26456a2a" exitCode=0 Nov 26 08:34:50 crc kubenswrapper[4909]: I1126 08:34:50.003641 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-5358-account-create-rljm2" event={"ID":"d12d5303-4df7-444c-a843-699aacd819b8","Type":"ContainerDied","Data":"4fe18fa3f527cffb7a86c2dd69516ea7fb496b60b13e1f884c03dfce26456a2a"} Nov 26 08:34:50 crc kubenswrapper[4909]: I1126 08:34:50.499358 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:34:50 crc kubenswrapper[4909]: E1126 08:34:50.499770 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:34:51 crc kubenswrapper[4909]: I1126 08:34:51.679281 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-5358-account-create-rljm2" Nov 26 08:34:51 crc kubenswrapper[4909]: I1126 08:34:51.766282 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77s6b\" (UniqueName: \"kubernetes.io/projected/d12d5303-4df7-444c-a843-699aacd819b8-kube-api-access-77s6b\") pod \"d12d5303-4df7-444c-a843-699aacd819b8\" (UID: \"d12d5303-4df7-444c-a843-699aacd819b8\") " Nov 26 08:34:51 crc kubenswrapper[4909]: I1126 08:34:51.772618 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d12d5303-4df7-444c-a843-699aacd819b8-kube-api-access-77s6b" (OuterVolumeSpecName: "kube-api-access-77s6b") pod "d12d5303-4df7-444c-a843-699aacd819b8" (UID: "d12d5303-4df7-444c-a843-699aacd819b8"). InnerVolumeSpecName "kube-api-access-77s6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:34:51 crc kubenswrapper[4909]: I1126 08:34:51.868118 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77s6b\" (UniqueName: \"kubernetes.io/projected/d12d5303-4df7-444c-a843-699aacd819b8-kube-api-access-77s6b\") on node \"crc\" DevicePath \"\"" Nov 26 08:34:52 crc kubenswrapper[4909]: I1126 08:34:52.032584 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-5358-account-create-rljm2" event={"ID":"d12d5303-4df7-444c-a843-699aacd819b8","Type":"ContainerDied","Data":"1ca85b16e63192106dc55ddc42808d1071efad4f749feefbf10fd8e4e267a3fb"} Nov 26 08:34:52 crc kubenswrapper[4909]: I1126 08:34:52.033112 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ca85b16e63192106dc55ddc42808d1071efad4f749feefbf10fd8e4e267a3fb" Nov 26 08:34:52 crc kubenswrapper[4909]: I1126 08:34:52.032764 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-5358-account-create-rljm2" Nov 26 08:34:53 crc kubenswrapper[4909]: I1126 08:34:53.848268 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8jkx8" podUID="1af0814b-2284-43ed-b8bc-91736abd63ac" containerName="ovn-controller" probeResult="failure" output=< Nov 26 08:34:53 crc kubenswrapper[4909]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 26 08:34:53 crc kubenswrapper[4909]: > Nov 26 08:34:53 crc kubenswrapper[4909]: I1126 08:34:53.911245 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:53 crc kubenswrapper[4909]: I1126 08:34:53.912539 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-79cx4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.152754 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8jkx8-config-k9hv4"] Nov 26 08:34:54 crc kubenswrapper[4909]: E1126 08:34:54.153230 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d12d5303-4df7-444c-a843-699aacd819b8" containerName="mariadb-account-create" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.153244 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12d5303-4df7-444c-a843-699aacd819b8" containerName="mariadb-account-create" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.153499 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="d12d5303-4df7-444c-a843-699aacd819b8" containerName="mariadb-account-create" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.154321 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.157276 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.172952 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jkx8-config-k9hv4"] Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.208625 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-scripts\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.208735 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.208799 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwvzb\" (UniqueName: \"kubernetes.io/projected/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-kube-api-access-fwvzb\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.208880 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run-ovn\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.208901 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-log-ovn\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.208931 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-additional-scripts\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.310563 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run-ovn\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.310892 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-log-ovn\") 
pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.310944 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-additional-scripts\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.310971 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-scripts\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.310998 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-log-ovn\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.311032 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.310892 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run-ovn\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.311087 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwvzb\" (UniqueName: \"kubernetes.io/projected/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-kube-api-access-fwvzb\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.311158 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.311955 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-additional-scripts\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.313361 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-scripts\") pod \"ovn-controller-8jkx8-config-k9hv4\" 
(UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.335265 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwvzb\" (UniqueName: \"kubernetes.io/projected/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-kube-api-access-fwvzb\") pod \"ovn-controller-8jkx8-config-k9hv4\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.472667 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.647085 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-578cc99bcb-vf9qr"] Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.656702 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.661495 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.661735 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.661860 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-l9gxh" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.682550 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-578cc99bcb-vf9qr"] Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.720630 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01af70d6-bbde-4669-b93c-c06719d58742-config-data\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.720685 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01af70d6-bbde-4669-b93c-c06719d58742-combined-ca-bundle\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.720708 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01af70d6-bbde-4669-b93c-c06719d58742-scripts\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.720749 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/01af70d6-bbde-4669-b93c-c06719d58742-octavia-run\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.720845 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/01af70d6-bbde-4669-b93c-c06719d58742-config-data-merged\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.822566 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/01af70d6-bbde-4669-b93c-c06719d58742-config-data-merged\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.822726 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01af70d6-bbde-4669-b93c-c06719d58742-config-data\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.822761 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01af70d6-bbde-4669-b93c-c06719d58742-combined-ca-bundle\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.822781 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01af70d6-bbde-4669-b93c-c06719d58742-scripts\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.822822 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/01af70d6-bbde-4669-b93c-c06719d58742-octavia-run\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.823358 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/01af70d6-bbde-4669-b93c-c06719d58742-octavia-run\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.823609 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/01af70d6-bbde-4669-b93c-c06719d58742-config-data-merged\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.831670 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01af70d6-bbde-4669-b93c-c06719d58742-scripts\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.843172 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01af70d6-bbde-4669-b93c-c06719d58742-config-data\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: 
\"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.843982 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01af70d6-bbde-4669-b93c-c06719d58742-combined-ca-bundle\") pod \"octavia-api-578cc99bcb-vf9qr\" (UID: \"01af70d6-bbde-4669-b93c-c06719d58742\") " pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:54 crc kubenswrapper[4909]: I1126 08:34:54.993057 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:34:55 crc kubenswrapper[4909]: I1126 08:34:55.008355 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jkx8-config-k9hv4"] Nov 26 08:34:55 crc kubenswrapper[4909]: I1126 08:34:55.067838 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jkx8-config-k9hv4" event={"ID":"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b","Type":"ContainerStarted","Data":"1efbc83bacdf7ac44ca27bcc8310d75f526b0cf4224f3e153767797590835c2f"} Nov 26 08:34:55 crc kubenswrapper[4909]: I1126 08:34:55.482631 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-578cc99bcb-vf9qr"] Nov 26 08:34:56 crc kubenswrapper[4909]: I1126 08:34:56.078050 4909 generic.go:334] "Generic (PLEG): container finished" podID="8de2fcfa-d1ba-4e78-870c-46ce5c700b3b" containerID="1f068b9f42a9bc0fd12b58e4a4e280abc7924725741ea0ee414cf038c41d1b28" exitCode=0 Nov 26 08:34:56 crc kubenswrapper[4909]: I1126 08:34:56.078167 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jkx8-config-k9hv4" event={"ID":"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b","Type":"ContainerDied","Data":"1f068b9f42a9bc0fd12b58e4a4e280abc7924725741ea0ee414cf038c41d1b28"} Nov 26 08:34:56 crc kubenswrapper[4909]: I1126 08:34:56.079703 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-578cc99bcb-vf9qr" event={"ID":"01af70d6-bbde-4669-b93c-c06719d58742","Type":"ContainerStarted","Data":"a2ceebe3de2f6959df5406aede7570ceae2b4e0050697ab79e5daeaf60fa617a"} Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.467159 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.581995 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run-ovn\") pod \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.582093 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run\") pod \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.582143 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwvzb\" (UniqueName: \"kubernetes.io/projected/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-kube-api-access-fwvzb\") pod \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.582233 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-additional-scripts\") pod \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.582281 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-scripts\") pod \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.582332 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-log-ovn\") pod \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\" (UID: \"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b\") " Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.582884 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b" (UID: "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.582926 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b" (UID: "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.582944 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run" (OuterVolumeSpecName: "var-run") pod "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b" (UID: "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.584818 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b" (UID: "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.585120 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-scripts" (OuterVolumeSpecName: "scripts") pod "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b" (UID: "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.593844 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-kube-api-access-fwvzb" (OuterVolumeSpecName: "kube-api-access-fwvzb") pod "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b" (UID: "8de2fcfa-d1ba-4e78-870c-46ce5c700b3b"). InnerVolumeSpecName "kube-api-access-fwvzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.685238 4909 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.685288 4909 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-run\") on node \"crc\" DevicePath \"\"" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.685307 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwvzb\" (UniqueName: \"kubernetes.io/projected/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-kube-api-access-fwvzb\") on node \"crc\" DevicePath \"\"" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.685327 4909 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.685344 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:34:57 crc kubenswrapper[4909]: I1126 08:34:57.685360 4909 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 08:34:58 crc kubenswrapper[4909]: I1126 08:34:58.123264 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8jkx8-config-k9hv4" Nov 26 08:34:58 crc kubenswrapper[4909]: I1126 08:34:58.124105 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jkx8-config-k9hv4" event={"ID":"8de2fcfa-d1ba-4e78-870c-46ce5c700b3b","Type":"ContainerDied","Data":"1efbc83bacdf7ac44ca27bcc8310d75f526b0cf4224f3e153767797590835c2f"} Nov 26 08:34:58 crc kubenswrapper[4909]: I1126 08:34:58.124215 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1efbc83bacdf7ac44ca27bcc8310d75f526b0cf4224f3e153767797590835c2f" Nov 26 08:34:58 crc kubenswrapper[4909]: I1126 08:34:58.566033 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8jkx8-config-k9hv4"] Nov 26 08:34:58 crc kubenswrapper[4909]: I1126 08:34:58.577787 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8jkx8-config-k9hv4"] Nov 26 08:34:58 crc kubenswrapper[4909]: I1126 08:34:58.854428 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-8jkx8" Nov 26 08:35:00 crc kubenswrapper[4909]: I1126 08:35:00.513895 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de2fcfa-d1ba-4e78-870c-46ce5c700b3b" path="/var/lib/kubelet/pods/8de2fcfa-d1ba-4e78-870c-46ce5c700b3b/volumes" Nov 26 08:35:01 crc kubenswrapper[4909]: I1126 08:35:01.498936 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:35:01 crc kubenswrapper[4909]: E1126 08:35:01.499539 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:35:06 crc kubenswrapper[4909]: I1126 08:35:06.204372 4909 generic.go:334] "Generic (PLEG): container finished" podID="01af70d6-bbde-4669-b93c-c06719d58742" containerID="eaf23a3bf46239f7b4707a4888231474948fda2f8c4ee2da66d4e73a11d3c961" exitCode=0 Nov 26 08:35:06 crc kubenswrapper[4909]: I1126 08:35:06.204476 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-578cc99bcb-vf9qr" event={"ID":"01af70d6-bbde-4669-b93c-c06719d58742","Type":"ContainerDied","Data":"eaf23a3bf46239f7b4707a4888231474948fda2f8c4ee2da66d4e73a11d3c961"} Nov 26 08:35:07 crc kubenswrapper[4909]: I1126 08:35:07.255510 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-578cc99bcb-vf9qr" event={"ID":"01af70d6-bbde-4669-b93c-c06719d58742","Type":"ContainerStarted","Data":"04fc4d65272a597703298bbe48cc9502b224361107127d5c4687be4ba283abae"} Nov 26 08:35:07 crc kubenswrapper[4909]: I1126 08:35:07.256950 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-578cc99bcb-vf9qr" event={"ID":"01af70d6-bbde-4669-b93c-c06719d58742","Type":"ContainerStarted","Data":"a7746231851ac4d38ad60b98ae61571ba4b572ec9fd1dc7c9385e7d1e68ad486"} Nov 26 08:35:07 crc kubenswrapper[4909]: I1126 08:35:07.257107 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:35:07 crc kubenswrapper[4909]: I1126 08:35:07.257275 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:35:07 crc kubenswrapper[4909]: I1126 08:35:07.286243 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-578cc99bcb-vf9qr" podStartSLOduration=3.5252070460000002 podStartE2EDuration="13.286221228s" podCreationTimestamp="2025-11-26 08:34:54 +0000 UTC" firstStartedPulling="2025-11-26 08:34:55.490747036 +0000 UTC m=+5667.636958212" lastFinishedPulling="2025-11-26 08:35:05.251761188 +0000 UTC m=+5677.397972394" observedRunningTime="2025-11-26 08:35:07.280248045 +0000 UTC m=+5679.426459221" watchObservedRunningTime="2025-11-26 08:35:07.286221228 +0000 UTC m=+5679.432432404" Nov 26 08:35:15 crc kubenswrapper[4909]: I1126 08:35:15.499115 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:35:15 crc kubenswrapper[4909]: E1126 08:35:15.500148 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.578760 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-wzr7s"] Nov 26 08:35:17 crc kubenswrapper[4909]: E1126 08:35:17.579542 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de2fcfa-d1ba-4e78-870c-46ce5c700b3b" containerName="ovn-config" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.579560 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de2fcfa-d1ba-4e78-870c-46ce5c700b3b" containerName="ovn-config" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.579861 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de2fcfa-d1ba-4e78-870c-46ce5c700b3b" containerName="ovn-config" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.581123 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.583224 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.583575 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.585339 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.588874 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-wzr7s"] Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.611953 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b481a95e-cbfb-446b-9229-3dff4536d732-config-data-merged\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.612012 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/b481a95e-cbfb-446b-9229-3dff4536d732-hm-ports\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.612218 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b481a95e-cbfb-446b-9229-3dff4536d732-scripts\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.612248 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b481a95e-cbfb-446b-9229-3dff4536d732-config-data\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.714774 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b481a95e-cbfb-446b-9229-3dff4536d732-scripts\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.715088 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b481a95e-cbfb-446b-9229-3dff4536d732-config-data\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.715286 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b481a95e-cbfb-446b-9229-3dff4536d732-config-data-merged\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.715388 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: 
\"kubernetes.io/configmap/b481a95e-cbfb-446b-9229-3dff4536d732-hm-ports\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.716368 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b481a95e-cbfb-446b-9229-3dff4536d732-config-data-merged\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.716681 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/b481a95e-cbfb-446b-9229-3dff4536d732-hm-ports\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.722217 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b481a95e-cbfb-446b-9229-3dff4536d732-config-data\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.722988 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b481a95e-cbfb-446b-9229-3dff4536d732-scripts\") pod \"octavia-rsyslog-wzr7s\" (UID: \"b481a95e-cbfb-446b-9229-3dff4536d732\") " pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:17 crc kubenswrapper[4909]: I1126 08:35:17.932641 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.202156 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-jjh5v"] Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.205335 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.209661 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.217038 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-jjh5v"] Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.226187 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/4c4b9c35-bd06-455d-a579-a7e0e2532c91-amphora-image\") pod \"octavia-image-upload-59f8cff499-jjh5v\" (UID: \"4c4b9c35-bd06-455d-a579-a7e0e2532c91\") " pod="openstack/octavia-image-upload-59f8cff499-jjh5v" Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.226830 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c4b9c35-bd06-455d-a579-a7e0e2532c91-httpd-config\") pod \"octavia-image-upload-59f8cff499-jjh5v\" (UID: \"4c4b9c35-bd06-455d-a579-a7e0e2532c91\") " pod="openstack/octavia-image-upload-59f8cff499-jjh5v" Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.330044 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/4c4b9c35-bd06-455d-a579-a7e0e2532c91-amphora-image\") pod \"octavia-image-upload-59f8cff499-jjh5v\" (UID: \"4c4b9c35-bd06-455d-a579-a7e0e2532c91\") " pod="openstack/octavia-image-upload-59f8cff499-jjh5v" Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.330255 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c4b9c35-bd06-455d-a579-a7e0e2532c91-httpd-config\") pod \"octavia-image-upload-59f8cff499-jjh5v\" (UID: \"4c4b9c35-bd06-455d-a579-a7e0e2532c91\") " pod="openstack/octavia-image-upload-59f8cff499-jjh5v" Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.330965 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/4c4b9c35-bd06-455d-a579-a7e0e2532c91-amphora-image\") pod \"octavia-image-upload-59f8cff499-jjh5v\" (UID: \"4c4b9c35-bd06-455d-a579-a7e0e2532c91\") " pod="openstack/octavia-image-upload-59f8cff499-jjh5v" Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.355829 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c4b9c35-bd06-455d-a579-a7e0e2532c91-httpd-config\") pod \"octavia-image-upload-59f8cff499-jjh5v\" (UID: \"4c4b9c35-bd06-455d-a579-a7e0e2532c91\") " pod="openstack/octavia-image-upload-59f8cff499-jjh5v" Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.537744 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" Nov 26 08:35:18 crc kubenswrapper[4909]: I1126 08:35:18.569491 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-wzr7s"] Nov 26 08:35:19 crc kubenswrapper[4909]: I1126 08:35:19.409337 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-wzr7s" event={"ID":"b481a95e-cbfb-446b-9229-3dff4536d732","Type":"ContainerStarted","Data":"08d193a61e02aec1de4a9c85fc64ef47af238d2fbda6c156a8adb5cbc0ab683a"} Nov 26 08:35:19 crc kubenswrapper[4909]: I1126 08:35:19.727484 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-jjh5v"] Nov 26 08:35:19 crc kubenswrapper[4909]: W1126 08:35:19.905413 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c4b9c35_bd06_455d_a579_a7e0e2532c91.slice/crio-e2e563709652e6a8b23651f292ed5325176d88a0c290d04283206afee36d4ab5 WatchSource:0}: Error finding container e2e563709652e6a8b23651f292ed5325176d88a0c290d04283206afee36d4ab5: Status 404 returned error can't find the container with id e2e563709652e6a8b23651f292ed5325176d88a0c290d04283206afee36d4ab5 Nov 26 08:35:20 crc kubenswrapper[4909]: I1126 08:35:20.420739 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" event={"ID":"4c4b9c35-bd06-455d-a579-a7e0e2532c91","Type":"ContainerStarted","Data":"e2e563709652e6a8b23651f292ed5325176d88a0c290d04283206afee36d4ab5"} Nov 26 08:35:21 crc kubenswrapper[4909]: I1126 08:35:21.433233 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-wzr7s" event={"ID":"b481a95e-cbfb-446b-9229-3dff4536d732","Type":"ContainerStarted","Data":"3fc9ce9b95dc13131001b0445315d0fc2f622fc2eeda3fac93ebf844ed64a8f4"} Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.377853 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-7rb9k"] Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.380361 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.382844 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.403584 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-7rb9k"] Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.457708 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data\") pod \"octavia-db-sync-7rb9k\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.457773 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-combined-ca-bundle\") pod \"octavia-db-sync-7rb9k\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.457820 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-scripts\") pod \"octavia-db-sync-7rb9k\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.457925 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data-merged\") pod \"octavia-db-sync-7rb9k\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.482311 4909 generic.go:334] "Generic (PLEG): container finished" podID="b481a95e-cbfb-446b-9229-3dff4536d732" containerID="3fc9ce9b95dc13131001b0445315d0fc2f622fc2eeda3fac93ebf844ed64a8f4" exitCode=0 Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.482385 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-wzr7s" event={"ID":"b481a95e-cbfb-446b-9229-3dff4536d732","Type":"ContainerDied","Data":"3fc9ce9b95dc13131001b0445315d0fc2f622fc2eeda3fac93ebf844ed64a8f4"} Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.559629 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-combined-ca-bundle\") pod \"octavia-db-sync-7rb9k\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.559753 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-scripts\") pod \"octavia-db-sync-7rb9k\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.559827 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data-merged\") pod \"octavia-db-sync-7rb9k\" (UID: 
\"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.559924 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data\") pod \"octavia-db-sync-7rb9k\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.561328 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data-merged\") pod \"octavia-db-sync-7rb9k\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.565495 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-scripts\") pod \"octavia-db-sync-7rb9k\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.565672 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-combined-ca-bundle\") pod \"octavia-db-sync-7rb9k\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.566010 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data\") pod \"octavia-db-sync-7rb9k\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:24 crc kubenswrapper[4909]: I1126 08:35:24.700038 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:25 crc kubenswrapper[4909]: I1126 08:35:25.237493 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-7rb9k"] Nov 26 08:35:28 crc kubenswrapper[4909]: I1126 08:35:28.539255 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-7rb9k" event={"ID":"9ca41f09-57ae-4c80-b0eb-bccd0c02a141","Type":"ContainerStarted","Data":"92d9fa7ca2393494c0cb71f0794fb5fdcc98c97ca9fddb1bb8a3493f06dea0d6"} Nov 26 08:35:29 crc kubenswrapper[4909]: I1126 08:35:29.432193 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:35:29 crc kubenswrapper[4909]: I1126 08:35:29.563035 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-578cc99bcb-vf9qr" Nov 26 08:35:30 crc kubenswrapper[4909]: I1126 08:35:30.499669 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:35:30 crc kubenswrapper[4909]: E1126 08:35:30.499903 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:35:30 crc kubenswrapper[4909]: I1126 08:35:30.566349 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" event={"ID":"4c4b9c35-bd06-455d-a579-a7e0e2532c91","Type":"ContainerStarted","Data":"7a35e0110ea2169f2ab021136db23e0ec073a3abe79d60e4681f35ff8f8aad1d"} Nov 26 08:35:30 crc kubenswrapper[4909]: I1126 08:35:30.571660 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-wzr7s" event={"ID":"b481a95e-cbfb-446b-9229-3dff4536d732","Type":"ContainerStarted","Data":"8c7bd10b278de1b833413694c7711d25116522a56f1603a0e86e6b0f76ca21cb"} Nov 26 08:35:30 crc kubenswrapper[4909]: I1126 08:35:30.572447 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:30 crc kubenswrapper[4909]: I1126 08:35:30.575376 4909 generic.go:334] "Generic (PLEG): container finished" podID="9ca41f09-57ae-4c80-b0eb-bccd0c02a141" containerID="c254ad81c035c1a22659c89c7aca9b511637ccee7728d3cdabc5f6eb141ddc4f" exitCode=0 Nov 26 08:35:30 crc kubenswrapper[4909]: I1126 08:35:30.575420 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-7rb9k" event={"ID":"9ca41f09-57ae-4c80-b0eb-bccd0c02a141","Type":"ContainerDied","Data":"c254ad81c035c1a22659c89c7aca9b511637ccee7728d3cdabc5f6eb141ddc4f"} Nov 26 08:35:30 crc kubenswrapper[4909]: I1126 08:35:30.653724 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-wzr7s" podStartSLOduration=2.85669133 podStartE2EDuration="13.653702739s" podCreationTimestamp="2025-11-26 08:35:17 +0000 UTC" firstStartedPulling="2025-11-26 08:35:18.577618476 +0000 UTC m=+5690.723829642" lastFinishedPulling="2025-11-26 08:35:29.374629885 +0000 UTC m=+5701.520841051" observedRunningTime="2025-11-26 08:35:30.642970697 +0000 UTC m=+5702.789181873" watchObservedRunningTime="2025-11-26 08:35:30.653702739 +0000 UTC m=+5702.799913915" Nov 26 
08:35:32 crc kubenswrapper[4909]: I1126 08:35:32.594257 4909 generic.go:334] "Generic (PLEG): container finished" podID="4c4b9c35-bd06-455d-a579-a7e0e2532c91" containerID="7a35e0110ea2169f2ab021136db23e0ec073a3abe79d60e4681f35ff8f8aad1d" exitCode=0 Nov 26 08:35:32 crc kubenswrapper[4909]: I1126 08:35:32.594344 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" event={"ID":"4c4b9c35-bd06-455d-a579-a7e0e2532c91","Type":"ContainerDied","Data":"7a35e0110ea2169f2ab021136db23e0ec073a3abe79d60e4681f35ff8f8aad1d"} Nov 26 08:35:33 crc kubenswrapper[4909]: I1126 08:35:33.611303 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-7rb9k" event={"ID":"9ca41f09-57ae-4c80-b0eb-bccd0c02a141","Type":"ContainerStarted","Data":"3108bf2120b949b6c88fc56cac72838ec3c1562cb9b0273d10751296ec6f439f"} Nov 26 08:35:33 crc kubenswrapper[4909]: I1126 08:35:33.661323 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-7rb9k" podStartSLOduration=9.661304691 podStartE2EDuration="9.661304691s" podCreationTimestamp="2025-11-26 08:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:35:33.649762446 +0000 UTC m=+5705.795973612" watchObservedRunningTime="2025-11-26 08:35:33.661304691 +0000 UTC m=+5705.807515857" Nov 26 08:35:34 crc kubenswrapper[4909]: I1126 08:35:34.621319 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" event={"ID":"4c4b9c35-bd06-455d-a579-a7e0e2532c91","Type":"ContainerStarted","Data":"98d530fbf9ed1fd8b85359b1d6bf2ec7c5e5308499f195c35b23bf73a63a34c1"} Nov 26 08:35:34 crc kubenswrapper[4909]: I1126 08:35:34.647881 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" podStartSLOduration=2.552600141 podStartE2EDuration="16.647860469s" podCreationTimestamp="2025-11-26 08:35:18 +0000 UTC" firstStartedPulling="2025-11-26 08:35:19.907201029 +0000 UTC m=+5692.053412195" lastFinishedPulling="2025-11-26 08:35:34.002461357 +0000 UTC m=+5706.148672523" observedRunningTime="2025-11-26 08:35:34.639310625 +0000 UTC m=+5706.785521801" watchObservedRunningTime="2025-11-26 08:35:34.647860469 +0000 UTC m=+5706.794071655" Nov 26 08:35:35 crc kubenswrapper[4909]: I1126 08:35:35.635254 4909 generic.go:334] "Generic (PLEG): container finished" podID="9ca41f09-57ae-4c80-b0eb-bccd0c02a141" containerID="3108bf2120b949b6c88fc56cac72838ec3c1562cb9b0273d10751296ec6f439f" exitCode=0 Nov 26 08:35:35 crc kubenswrapper[4909]: I1126 08:35:35.635356 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-7rb9k" event={"ID":"9ca41f09-57ae-4c80-b0eb-bccd0c02a141","Type":"ContainerDied","Data":"3108bf2120b949b6c88fc56cac72838ec3c1562cb9b0273d10751296ec6f439f"} Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.040290 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.164616 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data-merged\") pod \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.164969 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-scripts\") pod \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.165041 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-combined-ca-bundle\") pod \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.165120 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data\") pod \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\" (UID: \"9ca41f09-57ae-4c80-b0eb-bccd0c02a141\") " Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.170320 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-scripts" (OuterVolumeSpecName: "scripts") pod "9ca41f09-57ae-4c80-b0eb-bccd0c02a141" (UID: "9ca41f09-57ae-4c80-b0eb-bccd0c02a141"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.170338 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data" (OuterVolumeSpecName: "config-data") pod "9ca41f09-57ae-4c80-b0eb-bccd0c02a141" (UID: "9ca41f09-57ae-4c80-b0eb-bccd0c02a141"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.187202 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "9ca41f09-57ae-4c80-b0eb-bccd0c02a141" (UID: "9ca41f09-57ae-4c80-b0eb-bccd0c02a141"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.192583 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ca41f09-57ae-4c80-b0eb-bccd0c02a141" (UID: "9ca41f09-57ae-4c80-b0eb-bccd0c02a141"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.267648 4909 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data-merged\") on node \"crc\" DevicePath \"\"" Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.267683 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.267692 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.267700 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca41f09-57ae-4c80-b0eb-bccd0c02a141-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.653124 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-7rb9k" event={"ID":"9ca41f09-57ae-4c80-b0eb-bccd0c02a141","Type":"ContainerDied","Data":"92d9fa7ca2393494c0cb71f0794fb5fdcc98c97ca9fddb1bb8a3493f06dea0d6"} Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.653160 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92d9fa7ca2393494c0cb71f0794fb5fdcc98c97ca9fddb1bb8a3493f06dea0d6" Nov 26 08:35:37 crc kubenswrapper[4909]: I1126 08:35:37.653200 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-7rb9k" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.034552 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-znbzj"] Nov 26 08:35:44 crc kubenswrapper[4909]: E1126 08:35:44.035743 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca41f09-57ae-4c80-b0eb-bccd0c02a141" containerName="octavia-db-sync" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.035762 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca41f09-57ae-4c80-b0eb-bccd0c02a141" containerName="octavia-db-sync" Nov 26 08:35:44 crc kubenswrapper[4909]: E1126 08:35:44.035802 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca41f09-57ae-4c80-b0eb-bccd0c02a141" containerName="init" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.035811 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca41f09-57ae-4c80-b0eb-bccd0c02a141" containerName="init" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.036056 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca41f09-57ae-4c80-b0eb-bccd0c02a141" containerName="octavia-db-sync" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.037870 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.062470 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znbzj"] Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.096617 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-catalog-content\") pod \"redhat-marketplace-znbzj\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.096665 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-utilities\") pod \"redhat-marketplace-znbzj\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.096732 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp49z\" (UniqueName: \"kubernetes.io/projected/ae11d649-4901-46ca-aba6-34d58b41dbd5-kube-api-access-bp49z\") pod \"redhat-marketplace-znbzj\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.198310 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-catalog-content\") pod \"redhat-marketplace-znbzj\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.198369 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-utilities\") pod \"redhat-marketplace-znbzj\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.198429 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp49z\" (UniqueName: \"kubernetes.io/projected/ae11d649-4901-46ca-aba6-34d58b41dbd5-kube-api-access-bp49z\") pod \"redhat-marketplace-znbzj\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.198810 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-catalog-content\") pod \"redhat-marketplace-znbzj\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.198886 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-utilities\") pod \"redhat-marketplace-znbzj\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.229692 4909 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bp49z\" (UniqueName: \"kubernetes.io/projected/ae11d649-4901-46ca-aba6-34d58b41dbd5-kube-api-access-bp49z\") pod \"redhat-marketplace-znbzj\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.362332 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:44 crc kubenswrapper[4909]: I1126 08:35:44.878497 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znbzj"] Nov 26 08:35:44 crc kubenswrapper[4909]: W1126 08:35:44.882232 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae11d649_4901_46ca_aba6_34d58b41dbd5.slice/crio-fe7fa6be73be7c4e05a0c12e156bb69bac35ddb71f866d20fa2fe66fbf855d69 WatchSource:0}: Error finding container fe7fa6be73be7c4e05a0c12e156bb69bac35ddb71f866d20fa2fe66fbf855d69: Status 404 returned error can't find the container with id fe7fa6be73be7c4e05a0c12e156bb69bac35ddb71f866d20fa2fe66fbf855d69 Nov 26 08:35:45 crc kubenswrapper[4909]: I1126 08:35:45.498619 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:35:45 crc kubenswrapper[4909]: E1126 08:35:45.499157 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:35:45 crc kubenswrapper[4909]: I1126 08:35:45.741559 4909 generic.go:334] "Generic (PLEG): container finished" podID="ae11d649-4901-46ca-aba6-34d58b41dbd5" containerID="48bc6fe9c16219f731b58399c7b43cb042e4179193bd548ba877070dd714d550" exitCode=0 Nov 26 08:35:45 crc kubenswrapper[4909]: I1126 08:35:45.741648 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znbzj" event={"ID":"ae11d649-4901-46ca-aba6-34d58b41dbd5","Type":"ContainerDied","Data":"48bc6fe9c16219f731b58399c7b43cb042e4179193bd548ba877070dd714d550"} Nov 26 08:35:45 crc kubenswrapper[4909]: I1126 08:35:45.741688 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znbzj" event={"ID":"ae11d649-4901-46ca-aba6-34d58b41dbd5","Type":"ContainerStarted","Data":"fe7fa6be73be7c4e05a0c12e156bb69bac35ddb71f866d20fa2fe66fbf855d69"} Nov 26 08:35:46 crc kubenswrapper[4909]: I1126 08:35:46.754667 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znbzj" event={"ID":"ae11d649-4901-46ca-aba6-34d58b41dbd5","Type":"ContainerStarted","Data":"5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6"} Nov 26 08:35:47 crc kubenswrapper[4909]: I1126 08:35:47.768020 4909 generic.go:334] "Generic (PLEG): container finished" podID="ae11d649-4901-46ca-aba6-34d58b41dbd5" containerID="5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6" exitCode=0 Nov 26 08:35:47 crc kubenswrapper[4909]: I1126 08:35:47.768108 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znbzj" 
event={"ID":"ae11d649-4901-46ca-aba6-34d58b41dbd5","Type":"ContainerDied","Data":"5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6"} Nov 26 08:35:47 crc kubenswrapper[4909]: I1126 08:35:47.984558 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-wzr7s" Nov 26 08:35:48 crc kubenswrapper[4909]: I1126 08:35:48.782222 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znbzj" event={"ID":"ae11d649-4901-46ca-aba6-34d58b41dbd5","Type":"ContainerStarted","Data":"50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c"} Nov 26 08:35:48 crc kubenswrapper[4909]: I1126 08:35:48.825570 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-znbzj" podStartSLOduration=2.302309339 podStartE2EDuration="4.825542086s" podCreationTimestamp="2025-11-26 08:35:44 +0000 UTC" firstStartedPulling="2025-11-26 08:35:45.743624565 +0000 UTC m=+5717.889835731" lastFinishedPulling="2025-11-26 08:35:48.266857312 +0000 UTC m=+5720.413068478" observedRunningTime="2025-11-26 08:35:48.809000604 +0000 UTC m=+5720.955211810" watchObservedRunningTime="2025-11-26 08:35:48.825542086 +0000 UTC m=+5720.971753292" Nov 26 08:35:53 crc kubenswrapper[4909]: I1126 08:35:53.048259 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-88nfn"] Nov 26 08:35:53 crc kubenswrapper[4909]: I1126 08:35:53.062811 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-88nfn"] Nov 26 08:35:54 crc kubenswrapper[4909]: I1126 08:35:54.363007 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:54 crc kubenswrapper[4909]: I1126 08:35:54.363461 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:54 crc kubenswrapper[4909]: I1126 08:35:54.446238 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:54 crc kubenswrapper[4909]: I1126 08:35:54.512227 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb6fd40-d61c-4543-b3e3-4fb5507994eb" path="/var/lib/kubelet/pods/0bb6fd40-d61c-4543-b3e3-4fb5507994eb/volumes" Nov 26 08:35:54 crc kubenswrapper[4909]: I1126 08:35:54.890525 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:54 crc kubenswrapper[4909]: I1126 08:35:54.938899 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znbzj"] Nov 26 08:35:56 crc kubenswrapper[4909]: I1126 08:35:56.865767 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-znbzj" podUID="ae11d649-4901-46ca-aba6-34d58b41dbd5" containerName="registry-server" containerID="cri-o://50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c" gracePeriod=2 Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.473443 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.537453 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-utilities\") pod \"ae11d649-4901-46ca-aba6-34d58b41dbd5\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.537638 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp49z\" (UniqueName: \"kubernetes.io/projected/ae11d649-4901-46ca-aba6-34d58b41dbd5-kube-api-access-bp49z\") pod \"ae11d649-4901-46ca-aba6-34d58b41dbd5\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.537734 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-catalog-content\") pod \"ae11d649-4901-46ca-aba6-34d58b41dbd5\" (UID: \"ae11d649-4901-46ca-aba6-34d58b41dbd5\") " Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.538413 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-utilities" (OuterVolumeSpecName: "utilities") pod "ae11d649-4901-46ca-aba6-34d58b41dbd5" (UID: "ae11d649-4901-46ca-aba6-34d58b41dbd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.552759 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae11d649-4901-46ca-aba6-34d58b41dbd5-kube-api-access-bp49z" (OuterVolumeSpecName: "kube-api-access-bp49z") pod "ae11d649-4901-46ca-aba6-34d58b41dbd5" (UID: "ae11d649-4901-46ca-aba6-34d58b41dbd5"). InnerVolumeSpecName "kube-api-access-bp49z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.557434 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae11d649-4901-46ca-aba6-34d58b41dbd5" (UID: "ae11d649-4901-46ca-aba6-34d58b41dbd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.640267 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp49z\" (UniqueName: \"kubernetes.io/projected/ae11d649-4901-46ca-aba6-34d58b41dbd5-kube-api-access-bp49z\") on node \"crc\" DevicePath \"\"" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.640316 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.640328 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae11d649-4901-46ca-aba6-34d58b41dbd5-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.877314 4909 generic.go:334] "Generic (PLEG): container finished" podID="ae11d649-4901-46ca-aba6-34d58b41dbd5" containerID="50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c" exitCode=0 Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.877350 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znbzj" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.877360 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znbzj" event={"ID":"ae11d649-4901-46ca-aba6-34d58b41dbd5","Type":"ContainerDied","Data":"50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c"} Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.877397 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znbzj" event={"ID":"ae11d649-4901-46ca-aba6-34d58b41dbd5","Type":"ContainerDied","Data":"fe7fa6be73be7c4e05a0c12e156bb69bac35ddb71f866d20fa2fe66fbf855d69"} Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.877418 4909 scope.go:117] "RemoveContainer" containerID="50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.930741 4909 scope.go:117] "RemoveContainer" containerID="5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.937039 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znbzj"] Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.949040 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-znbzj"] Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.960808 4909 scope.go:117] "RemoveContainer" containerID="48bc6fe9c16219f731b58399c7b43cb042e4179193bd548ba877070dd714d550" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.998419 4909 scope.go:117] "RemoveContainer" containerID="50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c" Nov 26 08:35:57 crc kubenswrapper[4909]: E1126 08:35:57.998876 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c\": container with ID starting with 50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c not found: ID does not exist" containerID="50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.998908 4909 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c"} err="failed to get container status \"50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c\": rpc error: code = NotFound desc = could not find container \"50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c\": container with ID starting with 50fe91301ad463a345a6d628c9cfd86ef2e1055d7e4ebf6f37621bd7817cd18c not found: ID does not exist" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.998929 4909 scope.go:117] "RemoveContainer" containerID="5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6" Nov 26 08:35:57 crc kubenswrapper[4909]: E1126 08:35:57.999176 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6\": container with ID starting with 5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6 not found: ID does not exist" containerID="5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.999201 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6"} err="failed to get container status \"5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6\": rpc error: code = NotFound desc = could not find container \"5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6\": container with ID starting with 5f82d7a84019fd265dfab52b4393f10cd4038d3cdd7ea82b1fbd5dd483457bb6 not found: ID does not exist" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.999215 4909 scope.go:117] "RemoveContainer" containerID="48bc6fe9c16219f731b58399c7b43cb042e4179193bd548ba877070dd714d550" Nov 26 08:35:57 crc kubenswrapper[4909]: E1126 08:35:57.999469 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48bc6fe9c16219f731b58399c7b43cb042e4179193bd548ba877070dd714d550\": container with ID starting with 48bc6fe9c16219f731b58399c7b43cb042e4179193bd548ba877070dd714d550 not found: ID does not exist" containerID="48bc6fe9c16219f731b58399c7b43cb042e4179193bd548ba877070dd714d550" Nov 26 08:35:57 crc kubenswrapper[4909]: I1126 08:35:57.999491 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48bc6fe9c16219f731b58399c7b43cb042e4179193bd548ba877070dd714d550"} err="failed to get container status \"48bc6fe9c16219f731b58399c7b43cb042e4179193bd548ba877070dd714d550\": rpc error: code = NotFound desc = could not find container \"48bc6fe9c16219f731b58399c7b43cb042e4179193bd548ba877070dd714d550\": container with ID starting with 48bc6fe9c16219f731b58399c7b43cb042e4179193bd548ba877070dd714d550 not found: ID does not exist" Nov 26 08:35:58 crc kubenswrapper[4909]: I1126 08:35:58.510440 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae11d649-4901-46ca-aba6-34d58b41dbd5" path="/var/lib/kubelet/pods/ae11d649-4901-46ca-aba6-34d58b41dbd5/volumes" Nov 26 08:35:58 crc kubenswrapper[4909]: I1126 08:35:58.736042 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-jjh5v"] Nov 26 08:35:58 crc kubenswrapper[4909]: I1126 08:35:58.736240 4909 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" podUID="4c4b9c35-bd06-455d-a579-a7e0e2532c91" containerName="octavia-amphora-httpd" containerID="cri-o://98d530fbf9ed1fd8b85359b1d6bf2ec7c5e5308499f195c35b23bf73a63a34c1" gracePeriod=30 Nov 26 08:35:58 crc kubenswrapper[4909]: I1126 08:35:58.890686 4909 generic.go:334] "Generic (PLEG): container finished" podID="4c4b9c35-bd06-455d-a579-a7e0e2532c91" containerID="98d530fbf9ed1fd8b85359b1d6bf2ec7c5e5308499f195c35b23bf73a63a34c1" exitCode=0 Nov 26 08:35:58 crc kubenswrapper[4909]: I1126 08:35:58.890726 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" event={"ID":"4c4b9c35-bd06-455d-a579-a7e0e2532c91","Type":"ContainerDied","Data":"98d530fbf9ed1fd8b85359b1d6bf2ec7c5e5308499f195c35b23bf73a63a34c1"} Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.244798 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.273388 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c4b9c35-bd06-455d-a579-a7e0e2532c91-httpd-config\") pod \"4c4b9c35-bd06-455d-a579-a7e0e2532c91\" (UID: \"4c4b9c35-bd06-455d-a579-a7e0e2532c91\") " Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.273537 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/4c4b9c35-bd06-455d-a579-a7e0e2532c91-amphora-image\") pod \"4c4b9c35-bd06-455d-a579-a7e0e2532c91\" (UID: \"4c4b9c35-bd06-455d-a579-a7e0e2532c91\") " Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.299735 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c4b9c35-bd06-455d-a579-a7e0e2532c91-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4c4b9c35-bd06-455d-a579-a7e0e2532c91" (UID: "4c4b9c35-bd06-455d-a579-a7e0e2532c91"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.311335 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c4b9c35-bd06-455d-a579-a7e0e2532c91-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "4c4b9c35-bd06-455d-a579-a7e0e2532c91" (UID: "4c4b9c35-bd06-455d-a579-a7e0e2532c91"). InnerVolumeSpecName "amphora-image". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.379999 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c4b9c35-bd06-455d-a579-a7e0e2532c91-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.380032 4909 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/4c4b9c35-bd06-455d-a579-a7e0e2532c91-amphora-image\") on node \"crc\" DevicePath \"\"" Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.910627 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" event={"ID":"4c4b9c35-bd06-455d-a579-a7e0e2532c91","Type":"ContainerDied","Data":"e2e563709652e6a8b23651f292ed5325176d88a0c290d04283206afee36d4ab5"} Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.910942 4909 scope.go:117] "RemoveContainer" containerID="98d530fbf9ed1fd8b85359b1d6bf2ec7c5e5308499f195c35b23bf73a63a34c1" Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.910765 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-jjh5v" Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.956230 4909 scope.go:117] "RemoveContainer" containerID="7a35e0110ea2169f2ab021136db23e0ec073a3abe79d60e4681f35ff8f8aad1d" Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.965577 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-jjh5v"] Nov 26 08:35:59 crc kubenswrapper[4909]: I1126 08:35:59.975278 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-jjh5v"] Nov 26 08:36:00 crc kubenswrapper[4909]: I1126 08:36:00.503577 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:36:00 crc kubenswrapper[4909]: E1126 08:36:00.504088 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:36:00 crc kubenswrapper[4909]: I1126 08:36:00.511340 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c4b9c35-bd06-455d-a579-a7e0e2532c91" path="/var/lib/kubelet/pods/4c4b9c35-bd06-455d-a579-a7e0e2532c91/volumes" Nov 26 08:36:03 crc kubenswrapper[4909]: I1126 08:36:03.045036 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d922-account-create-hcgjn"] Nov 26 08:36:03 crc kubenswrapper[4909]: I1126 08:36:03.058853 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d922-account-create-hcgjn"] Nov 26 08:36:04 crc kubenswrapper[4909]: I1126 08:36:04.510248 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d67eccc-f207-4b2f-921c-dbf28a5438a9" path="/var/lib/kubelet/pods/5d67eccc-f207-4b2f-921c-dbf28a5438a9/volumes" Nov 26 08:36:09 crc kubenswrapper[4909]: I1126 08:36:09.040776 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-g6wdv"] Nov 26 08:36:09 crc kubenswrapper[4909]: I1126 08:36:09.049239 4909 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-g6wdv"] Nov 26 08:36:10 crc kubenswrapper[4909]: I1126 08:36:10.514982 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a1e5822-f048-404a-ae86-c9ad6248f715" path="/var/lib/kubelet/pods/0a1e5822-f048-404a-ae86-c9ad6248f715/volumes" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.279126 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8lj46"] Nov 26 08:36:13 crc kubenswrapper[4909]: E1126 08:36:13.279967 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae11d649-4901-46ca-aba6-34d58b41dbd5" containerName="extract-utilities" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.280223 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae11d649-4901-46ca-aba6-34d58b41dbd5" containerName="extract-utilities" Nov 26 08:36:13 crc kubenswrapper[4909]: E1126 08:36:13.280252 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4b9c35-bd06-455d-a579-a7e0e2532c91" containerName="octavia-amphora-httpd" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.280260 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4b9c35-bd06-455d-a579-a7e0e2532c91" containerName="octavia-amphora-httpd" Nov 26 08:36:13 crc kubenswrapper[4909]: E1126 08:36:13.280282 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4b9c35-bd06-455d-a579-a7e0e2532c91" containerName="init" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.280291 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4b9c35-bd06-455d-a579-a7e0e2532c91" containerName="init" Nov 26 08:36:13 crc kubenswrapper[4909]: E1126 08:36:13.280322 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae11d649-4901-46ca-aba6-34d58b41dbd5" containerName="registry-server" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.280330 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae11d649-4901-46ca-aba6-34d58b41dbd5" containerName="registry-server" Nov 26 08:36:13 crc kubenswrapper[4909]: E1126 08:36:13.280353 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae11d649-4901-46ca-aba6-34d58b41dbd5" containerName="extract-content" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.280361 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae11d649-4901-46ca-aba6-34d58b41dbd5" containerName="extract-content" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.280628 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c4b9c35-bd06-455d-a579-a7e0e2532c91" containerName="octavia-amphora-httpd" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.280645 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae11d649-4901-46ca-aba6-34d58b41dbd5" containerName="registry-server" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.282510 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.294500 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lj46"] Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.342106 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-catalog-content\") pod \"community-operators-8lj46\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.342215 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjv7l\" (UniqueName: \"kubernetes.io/projected/c35c2088-3611-40c6-9052-65e238c23064-kube-api-access-kjv7l\") pod \"community-operators-8lj46\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.342297 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-utilities\") pod \"community-operators-8lj46\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.444346 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-catalog-content\") pod \"community-operators-8lj46\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.444483 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjv7l\" (UniqueName: \"kubernetes.io/projected/c35c2088-3611-40c6-9052-65e238c23064-kube-api-access-kjv7l\") pod \"community-operators-8lj46\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.444661 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-utilities\") pod \"community-operators-8lj46\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.445011 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-catalog-content\") pod \"community-operators-8lj46\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.445344 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-utilities\") pod \"community-operators-8lj46\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.468691 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kjv7l\" (UniqueName: \"kubernetes.io/projected/c35c2088-3611-40c6-9052-65e238c23064-kube-api-access-kjv7l\") pod \"community-operators-8lj46\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:13 crc kubenswrapper[4909]: I1126 08:36:13.635022 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:14 crc kubenswrapper[4909]: W1126 08:36:14.118943 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc35c2088_3611_40c6_9052_65e238c23064.slice/crio-d4b2f4d6649f49b6ed6c53711290d05b286f05d8633bae883cb59f53be405628 WatchSource:0}: Error finding container d4b2f4d6649f49b6ed6c53711290d05b286f05d8633bae883cb59f53be405628: Status 404 returned error can't find the container with id d4b2f4d6649f49b6ed6c53711290d05b286f05d8633bae883cb59f53be405628 Nov 26 08:36:14 crc kubenswrapper[4909]: I1126 08:36:14.121194 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lj46"] Nov 26 08:36:14 crc kubenswrapper[4909]: I1126 08:36:14.499357 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:36:14 crc kubenswrapper[4909]: E1126 08:36:14.499569 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:36:15 crc kubenswrapper[4909]: I1126 08:36:15.092693 4909 generic.go:334] "Generic (PLEG): container finished" podID="c35c2088-3611-40c6-9052-65e238c23064" containerID="79b230d4e7e23b2690c7ab9fbd87d6faaa76b6689f5b14325be0fb73c5bbeffb" exitCode=0 Nov 26 08:36:15 crc kubenswrapper[4909]: I1126 08:36:15.092842 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lj46" event={"ID":"c35c2088-3611-40c6-9052-65e238c23064","Type":"ContainerDied","Data":"79b230d4e7e23b2690c7ab9fbd87d6faaa76b6689f5b14325be0fb73c5bbeffb"} Nov 26 08:36:15 crc kubenswrapper[4909]: I1126 08:36:15.093244 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lj46" event={"ID":"c35c2088-3611-40c6-9052-65e238c23064","Type":"ContainerStarted","Data":"d4b2f4d6649f49b6ed6c53711290d05b286f05d8633bae883cb59f53be405628"} Nov 26 08:36:16 crc kubenswrapper[4909]: I1126 08:36:16.108695 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lj46" event={"ID":"c35c2088-3611-40c6-9052-65e238c23064","Type":"ContainerStarted","Data":"9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a"} Nov 26 08:36:17 crc kubenswrapper[4909]: I1126 08:36:17.119656 4909 generic.go:334] "Generic (PLEG): container finished" podID="c35c2088-3611-40c6-9052-65e238c23064" containerID="9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a" exitCode=0 Nov 26 08:36:17 crc kubenswrapper[4909]: I1126 08:36:17.119887 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lj46" 
event={"ID":"c35c2088-3611-40c6-9052-65e238c23064","Type":"ContainerDied","Data":"9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a"} Nov 26 08:36:18 crc kubenswrapper[4909]: I1126 08:36:18.130368 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lj46" event={"ID":"c35c2088-3611-40c6-9052-65e238c23064","Type":"ContainerStarted","Data":"9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8"} Nov 26 08:36:18 crc kubenswrapper[4909]: I1126 08:36:18.152974 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8lj46" podStartSLOduration=2.702105683 podStartE2EDuration="5.152955882s" podCreationTimestamp="2025-11-26 08:36:13 +0000 UTC" firstStartedPulling="2025-11-26 08:36:15.096668651 +0000 UTC m=+5747.242879827" lastFinishedPulling="2025-11-26 08:36:17.54751885 +0000 UTC m=+5749.693730026" observedRunningTime="2025-11-26 08:36:18.145195929 +0000 UTC m=+5750.291407115" watchObservedRunningTime="2025-11-26 08:36:18.152955882 +0000 UTC m=+5750.299167048" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.461763 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-jtsrx"] Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.463997 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.469313 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.469468 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.469479 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.520722 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-jtsrx"] Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.641072 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/427f7e74-637f-4c5b-be23-132aaf076de2-config-data-merged\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.641118 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-combined-ca-bundle\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.641152 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/427f7e74-637f-4c5b-be23-132aaf076de2-hm-ports\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.641368 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-scripts\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.641409 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-config-data\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.641499 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-amphora-certs\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.743838 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-scripts\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.743894 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-config-data\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.743962 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-amphora-certs\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.744060 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/427f7e74-637f-4c5b-be23-132aaf076de2-config-data-merged\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.744091 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-combined-ca-bundle\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.744117 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/427f7e74-637f-4c5b-be23-132aaf076de2-hm-ports\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.744819 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/427f7e74-637f-4c5b-be23-132aaf076de2-config-data-merged\") pod 
\"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.745340 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/427f7e74-637f-4c5b-be23-132aaf076de2-hm-ports\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.750216 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-combined-ca-bundle\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.750622 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-scripts\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.751262 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-config-data\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.754229 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/427f7e74-637f-4c5b-be23-132aaf076de2-amphora-certs\") pod \"octavia-healthmanager-jtsrx\" (UID: \"427f7e74-637f-4c5b-be23-132aaf076de2\") " pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:22 crc kubenswrapper[4909]: I1126 08:36:22.793237 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.464645 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-jtsrx"] Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.638886 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.638939 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.691464 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.764033 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-rbx7b"] Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.766351 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.772210 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-rbx7b"] Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.772532 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.772744 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.873328 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-amphora-certs\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.873396 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-config-data\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.873425 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-combined-ca-bundle\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.873519 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-scripts\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.873565 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2efa071d-456a-4c34-aa73-1da5e9efd3f3-config-data-merged\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.873613 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2efa071d-456a-4c34-aa73-1da5e9efd3f3-hm-ports\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.974932 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-scripts\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.974985 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/2efa071d-456a-4c34-aa73-1da5e9efd3f3-config-data-merged\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.975015 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2efa071d-456a-4c34-aa73-1da5e9efd3f3-hm-ports\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.975075 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-amphora-certs\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.975105 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-config-data\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.975125 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-combined-ca-bundle\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.977239 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2efa071d-456a-4c34-aa73-1da5e9efd3f3-hm-ports\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.977882 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2efa071d-456a-4c34-aa73-1da5e9efd3f3-config-data-merged\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.981408 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-amphora-certs\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.982875 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-combined-ca-bundle\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.984828 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-config-data\") pod \"octavia-housekeeping-rbx7b\" (UID: 
\"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:23 crc kubenswrapper[4909]: I1126 08:36:23.995819 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efa071d-456a-4c34-aa73-1da5e9efd3f3-scripts\") pod \"octavia-housekeeping-rbx7b\" (UID: \"2efa071d-456a-4c34-aa73-1da5e9efd3f3\") " pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:24 crc kubenswrapper[4909]: I1126 08:36:24.106385 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:24 crc kubenswrapper[4909]: I1126 08:36:24.182423 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jtsrx" event={"ID":"427f7e74-637f-4c5b-be23-132aaf076de2","Type":"ContainerStarted","Data":"82c0c97b08a944fa7fac807b44c0ea785603a0b9da043ac980bb489177441227"} Nov 26 08:36:24 crc kubenswrapper[4909]: I1126 08:36:24.182484 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jtsrx" event={"ID":"427f7e74-637f-4c5b-be23-132aaf076de2","Type":"ContainerStarted","Data":"a74a2c3c1c33b46801b691bd334e37fe42b58f98220f0ab6aa6c2c311416854d"} Nov 26 08:36:24 crc kubenswrapper[4909]: I1126 08:36:24.262542 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:24 crc kubenswrapper[4909]: I1126 08:36:24.318473 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8lj46"] Nov 26 08:36:24 crc kubenswrapper[4909]: I1126 08:36:24.693987 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-rbx7b"] Nov 26 08:36:24 crc kubenswrapper[4909]: W1126 08:36:24.713727 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2efa071d_456a_4c34_aa73_1da5e9efd3f3.slice/crio-58c5183d7394ea8dca098cd3cf4755cb6a2a504072fe6742ed88924052f94a31 WatchSource:0}: Error finding container 58c5183d7394ea8dca098cd3cf4755cb6a2a504072fe6742ed88924052f94a31: Status 404 returned error can't find the container with id 58c5183d7394ea8dca098cd3cf4755cb6a2a504072fe6742ed88924052f94a31 Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.212347 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-rbx7b" event={"ID":"2efa071d-456a-4c34-aa73-1da5e9efd3f3","Type":"ContainerStarted","Data":"58c5183d7394ea8dca098cd3cf4755cb6a2a504072fe6742ed88924052f94a31"} Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.714466 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-58r8d"] Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.717347 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.719514 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.720145 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.724200 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-58r8d"] Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.818042 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-scripts\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.818119 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28bc7581-0e69-42a3-b086-a83b1e730ee1-config-data-merged\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.818226 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/28bc7581-0e69-42a3-b086-a83b1e730ee1-hm-ports\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.818258 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-combined-ca-bundle\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.818298 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-amphora-certs\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.818326 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-config-data\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.920130 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-scripts\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.921915 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28bc7581-0e69-42a3-b086-a83b1e730ee1-config-data-merged\") pod \"octavia-worker-58r8d\" (UID: 
\"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.921961 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/28bc7581-0e69-42a3-b086-a83b1e730ee1-hm-ports\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.922005 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-combined-ca-bundle\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.922065 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-amphora-certs\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.923447 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28bc7581-0e69-42a3-b086-a83b1e730ee1-config-data-merged\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.923842 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-config-data\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.926468 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-scripts\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.926929 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-combined-ca-bundle\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.927040 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-amphora-certs\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.929111 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/28bc7581-0e69-42a3-b086-a83b1e730ee1-hm-ports\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:25 crc kubenswrapper[4909]: I1126 08:36:25.932753 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/28bc7581-0e69-42a3-b086-a83b1e730ee1-config-data\") pod \"octavia-worker-58r8d\" (UID: \"28bc7581-0e69-42a3-b086-a83b1e730ee1\") " pod="openstack/octavia-worker-58r8d" Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.055392 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-58r8d" Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.228184 4909 generic.go:334] "Generic (PLEG): container finished" podID="427f7e74-637f-4c5b-be23-132aaf076de2" containerID="82c0c97b08a944fa7fac807b44c0ea785603a0b9da043ac980bb489177441227" exitCode=0 Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.228709 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8lj46" podUID="c35c2088-3611-40c6-9052-65e238c23064" containerName="registry-server" containerID="cri-o://9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8" gracePeriod=2 Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.229211 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jtsrx" event={"ID":"427f7e74-637f-4c5b-be23-132aaf076de2","Type":"ContainerDied","Data":"82c0c97b08a944fa7fac807b44c0ea785603a0b9da043ac980bb489177441227"} Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.648552 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-58r8d"] Nov 26 08:36:26 crc kubenswrapper[4909]: W1126 08:36:26.668054 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28bc7581_0e69_42a3_b086_a83b1e730ee1.slice/crio-0d5c47983d3208e0acdf5ab631365e225b57b52a2dde72d758114ba56a9d63bb WatchSource:0}: Error finding container 0d5c47983d3208e0acdf5ab631365e225b57b52a2dde72d758114ba56a9d63bb: Status 404 returned error can't find the container with id 0d5c47983d3208e0acdf5ab631365e225b57b52a2dde72d758114ba56a9d63bb Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.686326 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.748969 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-utilities\") pod \"c35c2088-3611-40c6-9052-65e238c23064\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.749043 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjv7l\" (UniqueName: \"kubernetes.io/projected/c35c2088-3611-40c6-9052-65e238c23064-kube-api-access-kjv7l\") pod \"c35c2088-3611-40c6-9052-65e238c23064\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.749136 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-catalog-content\") pod \"c35c2088-3611-40c6-9052-65e238c23064\" (UID: \"c35c2088-3611-40c6-9052-65e238c23064\") " Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.766874 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c35c2088-3611-40c6-9052-65e238c23064-kube-api-access-kjv7l" (OuterVolumeSpecName: "kube-api-access-kjv7l") pod "c35c2088-3611-40c6-9052-65e238c23064" (UID: "c35c2088-3611-40c6-9052-65e238c23064"). InnerVolumeSpecName "kube-api-access-kjv7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.767630 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-utilities" (OuterVolumeSpecName: "utilities") pod "c35c2088-3611-40c6-9052-65e238c23064" (UID: "c35c2088-3611-40c6-9052-65e238c23064"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.856417 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:36:26 crc kubenswrapper[4909]: I1126 08:36:26.856456 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjv7l\" (UniqueName: \"kubernetes.io/projected/c35c2088-3611-40c6-9052-65e238c23064-kube-api-access-kjv7l\") on node \"crc\" DevicePath \"\"" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.239391 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-58r8d" event={"ID":"28bc7581-0e69-42a3-b086-a83b1e730ee1","Type":"ContainerStarted","Data":"0d5c47983d3208e0acdf5ab631365e225b57b52a2dde72d758114ba56a9d63bb"} Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.241856 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-rbx7b" event={"ID":"2efa071d-456a-4c34-aa73-1da5e9efd3f3","Type":"ContainerStarted","Data":"2ffd0034b19631a0990cae52d40e9910ee69e601280c591d779e563de149065a"} Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.245081 4909 generic.go:334] "Generic (PLEG): container finished" podID="c35c2088-3611-40c6-9052-65e238c23064" containerID="9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8" exitCode=0 Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.245137 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lj46" event={"ID":"c35c2088-3611-40c6-9052-65e238c23064","Type":"ContainerDied","Data":"9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8"} Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.245159 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lj46" event={"ID":"c35c2088-3611-40c6-9052-65e238c23064","Type":"ContainerDied","Data":"d4b2f4d6649f49b6ed6c53711290d05b286f05d8633bae883cb59f53be405628"} Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.245175 4909 scope.go:117] "RemoveContainer" containerID="9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.245220 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8lj46" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.248770 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jtsrx" event={"ID":"427f7e74-637f-4c5b-be23-132aaf076de2","Type":"ContainerStarted","Data":"73e484e058f0092791002443e66149cbff64050be97492a4bba3e2cc77e5b14a"} Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.249473 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-jtsrx" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.277477 4909 scope.go:117] "RemoveContainer" containerID="9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.302112 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-jtsrx" podStartSLOduration=5.302091815 podStartE2EDuration="5.302091815s" podCreationTimestamp="2025-11-26 08:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:36:27.291152737 +0000 UTC m=+5759.437363903" watchObservedRunningTime="2025-11-26 08:36:27.302091815 +0000 UTC m=+5759.448302991" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.308807 4909 scope.go:117] "RemoveContainer" containerID="79b230d4e7e23b2690c7ab9fbd87d6faaa76b6689f5b14325be0fb73c5bbeffb" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.368156 4909 scope.go:117] "RemoveContainer" containerID="9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8" Nov 26 08:36:27 crc kubenswrapper[4909]: E1126 08:36:27.373309 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8\": container with ID starting with 9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8 not found: ID does not exist" containerID="9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.373409 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8"} err="failed to get container status \"9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8\": rpc error: code = NotFound desc = could not find container \"9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8\": container with ID starting with 9d9e5713f1b447f4edf09bb3218d8913a498207f7964de760f3ce871a2c2ede8 not found: ID does not exist" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.373451 4909 scope.go:117] "RemoveContainer" containerID="9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a" Nov 26 08:36:27 crc kubenswrapper[4909]: E1126 08:36:27.377999 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a\": container with ID starting with 9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a not found: ID does not exist" containerID="9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.378096 4909 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a"} err="failed to get container status \"9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a\": rpc error: code = NotFound desc = could not find container \"9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a\": container with ID starting with 9c8ba4b8898ceec35a7d2c59f29ad47fcf0c622e618ec11c6fd5477041b7630a not found: ID does not exist" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.378134 4909 scope.go:117] "RemoveContainer" containerID="79b230d4e7e23b2690c7ab9fbd87d6faaa76b6689f5b14325be0fb73c5bbeffb" Nov 26 08:36:27 crc kubenswrapper[4909]: E1126 08:36:27.380010 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b230d4e7e23b2690c7ab9fbd87d6faaa76b6689f5b14325be0fb73c5bbeffb\": container with ID starting with 79b230d4e7e23b2690c7ab9fbd87d6faaa76b6689f5b14325be0fb73c5bbeffb not found: ID does not exist" containerID="79b230d4e7e23b2690c7ab9fbd87d6faaa76b6689f5b14325be0fb73c5bbeffb" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.380181 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b230d4e7e23b2690c7ab9fbd87d6faaa76b6689f5b14325be0fb73c5bbeffb"} err="failed to get container status \"79b230d4e7e23b2690c7ab9fbd87d6faaa76b6689f5b14325be0fb73c5bbeffb\": rpc error: code = NotFound desc = could not find container \"79b230d4e7e23b2690c7ab9fbd87d6faaa76b6689f5b14325be0fb73c5bbeffb\": container with ID starting with 79b230d4e7e23b2690c7ab9fbd87d6faaa76b6689f5b14325be0fb73c5bbeffb not found: ID does not exist" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.598853 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c35c2088-3611-40c6-9052-65e238c23064" (UID: "c35c2088-3611-40c6-9052-65e238c23064"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.694321 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35c2088-3611-40c6-9052-65e238c23064-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.893841 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8lj46"] Nov 26 08:36:27 crc kubenswrapper[4909]: I1126 08:36:27.919704 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8lj46"] Nov 26 08:36:28 crc kubenswrapper[4909]: I1126 08:36:28.259302 4909 generic.go:334] "Generic (PLEG): container finished" podID="2efa071d-456a-4c34-aa73-1da5e9efd3f3" containerID="2ffd0034b19631a0990cae52d40e9910ee69e601280c591d779e563de149065a" exitCode=0 Nov 26 08:36:28 crc kubenswrapper[4909]: I1126 08:36:28.260815 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-rbx7b" event={"ID":"2efa071d-456a-4c34-aa73-1da5e9efd3f3","Type":"ContainerDied","Data":"2ffd0034b19631a0990cae52d40e9910ee69e601280c591d779e563de149065a"} Nov 26 08:36:28 crc kubenswrapper[4909]: I1126 08:36:28.514190 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c35c2088-3611-40c6-9052-65e238c23064" path="/var/lib/kubelet/pods/c35c2088-3611-40c6-9052-65e238c23064/volumes" Nov 26 08:36:29 crc kubenswrapper[4909]: I1126 08:36:29.269076 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-58r8d" event={"ID":"28bc7581-0e69-42a3-b086-a83b1e730ee1","Type":"ContainerStarted","Data":"38f8608e0e299c450af610ae02987f6e815d40b41475fcfb67d7901f76d5ce72"} Nov 26 08:36:29 crc kubenswrapper[4909]: I1126 08:36:29.273601 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-rbx7b" event={"ID":"2efa071d-456a-4c34-aa73-1da5e9efd3f3","Type":"ContainerStarted","Data":"1029866d704ca5399add6aa7955716bb5c4c9015fb51eebeb197e517cc46c810"} Nov 26 08:36:29 crc kubenswrapper[4909]: I1126 08:36:29.273792 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-rbx7b" Nov 26 08:36:29 crc kubenswrapper[4909]: I1126 08:36:29.499196 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:36:29 crc kubenswrapper[4909]: E1126 08:36:29.499657 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:36:30 crc kubenswrapper[4909]: I1126 08:36:30.285254 4909 generic.go:334] "Generic (PLEG): container finished" podID="28bc7581-0e69-42a3-b086-a83b1e730ee1" containerID="38f8608e0e299c450af610ae02987f6e815d40b41475fcfb67d7901f76d5ce72" exitCode=0 Nov 26 08:36:30 crc kubenswrapper[4909]: I1126 08:36:30.285346 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-58r8d" event={"ID":"28bc7581-0e69-42a3-b086-a83b1e730ee1","Type":"ContainerDied","Data":"38f8608e0e299c450af610ae02987f6e815d40b41475fcfb67d7901f76d5ce72"} Nov 26 08:36:30 crc 
Nov 26 08:36:30 crc kubenswrapper[4909]: I1126 08:36:30.324875 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-rbx7b" podStartSLOduration=5.87157166 podStartE2EDuration="7.324852811s" podCreationTimestamp="2025-11-26 08:36:23 +0000 UTC" firstStartedPulling="2025-11-26 08:36:24.718262535 +0000 UTC m=+5756.864473701" lastFinishedPulling="2025-11-26 08:36:26.171543686 +0000 UTC m=+5758.317754852" observedRunningTime="2025-11-26 08:36:29.32242727 +0000 UTC m=+5761.468638426" watchObservedRunningTime="2025-11-26 08:36:30.324852811 +0000 UTC m=+5762.471063977"
Nov 26 08:36:31 crc kubenswrapper[4909]: I1126 08:36:31.298863 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-58r8d" event={"ID":"28bc7581-0e69-42a3-b086-a83b1e730ee1","Type":"ContainerStarted","Data":"b8b232aa78e4389745e0c6667cc72bb4a960ebc5a7c3b3fa35f28231f195847b"}
Nov 26 08:36:31 crc kubenswrapper[4909]: I1126 08:36:31.300400 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-58r8d"
Nov 26 08:36:31 crc kubenswrapper[4909]: I1126 08:36:31.321505 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-58r8d" podStartSLOduration=4.610681872 podStartE2EDuration="6.321484334s" podCreationTimestamp="2025-11-26 08:36:25 +0000 UTC" firstStartedPulling="2025-11-26 08:36:26.729514222 +0000 UTC m=+5758.875725388" lastFinishedPulling="2025-11-26 08:36:28.440316674 +0000 UTC m=+5760.586527850" observedRunningTime="2025-11-26 08:36:31.320297871 +0000 UTC m=+5763.466509077" watchObservedRunningTime="2025-11-26 08:36:31.321484334 +0000 UTC m=+5763.467695500"
Nov 26 08:36:37 crc kubenswrapper[4909]: I1126 08:36:37.832969 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-jtsrx"
Nov 26 08:36:38 crc kubenswrapper[4909]: I1126 08:36:38.054413 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lgjrg"]
Nov 26 08:36:38 crc kubenswrapper[4909]: I1126 08:36:38.069431 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lgjrg"]
Nov 26 08:36:38 crc kubenswrapper[4909]: I1126 08:36:38.511763 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3473da28-3496-4ca7-bf00-33062d75438f" path="/var/lib/kubelet/pods/3473da28-3496-4ca7-bf00-33062d75438f/volumes"
Nov 26 08:36:39 crc kubenswrapper[4909]: I1126 08:36:39.069789 4909 scope.go:117] "RemoveContainer" containerID="9c33b6d857a92f02b22caf20289afe3dddcd65b2a24aa320d70df36f1eadd2c3"
Nov 26 08:36:39 crc kubenswrapper[4909]: I1126 08:36:39.099197 4909 scope.go:117] "RemoveContainer" containerID="0d337a119107f1e36ca3cb54330ad8aa552457aa2ed4a7de176794867acc79a0"
Nov 26 08:36:39 crc kubenswrapper[4909]: I1126 08:36:39.134699 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-rbx7b"
Nov 26 08:36:39 crc kubenswrapper[4909]: I1126 08:36:39.153161 4909 scope.go:117] "RemoveContainer" containerID="2e13ca9a6527d296d8abe25a43a72c7f7533c73dd63d17f8d4d0b4e3847efcca"
Nov 26 08:36:39 crc kubenswrapper[4909]: I1126 08:36:39.201478 4909 scope.go:117] "RemoveContainer" containerID="04fabe8516c77486a4e7fb80559f8abc7ed90f75ebbe157c43c7895ab2544082"
Nov 26 08:36:39 crc kubenswrapper[4909]: I1126 08:36:39.236336 4909 scope.go:117] "RemoveContainer" containerID="47b15f09d246279e96bd9311f487ed379c5e4789f6de9422a06a7dc3802a80b7"
Nov 26 08:36:39 crc kubenswrapper[4909]: I1126 08:36:39.263769 4909 scope.go:117] "RemoveContainer" containerID="fc6afa17d6730e83eb9f9a629ec0d3084b061a99439285e0971933bd62519f50"
Nov 26 08:36:41 crc kubenswrapper[4909]: I1126 08:36:41.096711 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-58r8d"
Nov 26 08:36:43 crc kubenswrapper[4909]: I1126 08:36:43.499307 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed"
Nov 26 08:36:43 crc kubenswrapper[4909]: E1126 08:36:43.500096 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:36:48 crc kubenswrapper[4909]: I1126 08:36:48.032056 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a2dd-account-create-pbnxh"]
Nov 26 08:36:48 crc kubenswrapper[4909]: I1126 08:36:48.043267 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a2dd-account-create-pbnxh"]
Nov 26 08:36:48 crc kubenswrapper[4909]: I1126 08:36:48.512066 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6" path="/var/lib/kubelet/pods/5fd3b2e7-3f4a-4016-b530-a49e1e0a87c6/volumes"
Nov 26 08:36:57 crc kubenswrapper[4909]: I1126 08:36:57.057404 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-ncs2m"]
Nov 26 08:36:57 crc kubenswrapper[4909]: I1126 08:36:57.074503 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-ncs2m"]
Nov 26 08:36:57 crc kubenswrapper[4909]: I1126 08:36:57.500191 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed"
Nov 26 08:36:57 crc kubenswrapper[4909]: E1126 08:36:57.501275 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:36:58 crc kubenswrapper[4909]: I1126 08:36:58.510900 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd33700-e3a8-408c-a906-8f26ee87dbb8" path="/var/lib/kubelet/pods/ddd33700-e3a8-408c-a906-8f26ee87dbb8/volumes"
Nov 26 08:37:11 crc kubenswrapper[4909]: I1126 08:37:11.501152 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed"
Nov 26 08:37:11 crc kubenswrapper[4909]: E1126 08:37:11.502060 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
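
The two "Observed pod startup duration" lines decompose cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling); for pods that needed no pull, such as octavia-healthmanager-jtsrx above with zero-value pull timestamps, the two figures coincide. Re-deriving the octavia-housekeeping-rbx7b numbers from the logged timestamps (a verification sketch, not kubelet code):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the octavia-housekeeping-rbx7b line above.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	parse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2025-11-26 08:36:23 +0000 UTC")
    	pullStart := parse("2025-11-26 08:36:24.718262535 +0000 UTC")
    	pullEnd := parse("2025-11-26 08:36:26.171543686 +0000 UTC")
    	running := parse("2025-11-26 08:36:30.324852811 +0000 UTC")

    	e2e := running.Sub(created)         // podStartE2EDuration
    	slo := e2e - pullEnd.Sub(pullStart) // image-pull window excluded
    	fmt.Println("E2E:", e2e)            // 7.324852811s, as logged
    	fmt.Println("SLO:", slo)            // 5.87157166s, as logged
    }
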
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.010665 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5bf6b4bd6f-r65dv"]
Nov 26 08:37:24 crc kubenswrapper[4909]: E1126 08:37:24.011693 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35c2088-3611-40c6-9052-65e238c23064" containerName="extract-content"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.011709 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35c2088-3611-40c6-9052-65e238c23064" containerName="extract-content"
Nov 26 08:37:24 crc kubenswrapper[4909]: E1126 08:37:24.011753 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35c2088-3611-40c6-9052-65e238c23064" containerName="registry-server"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.011761 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35c2088-3611-40c6-9052-65e238c23064" containerName="registry-server"
Nov 26 08:37:24 crc kubenswrapper[4909]: E1126 08:37:24.011782 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35c2088-3611-40c6-9052-65e238c23064" containerName="extract-utilities"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.011790 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35c2088-3611-40c6-9052-65e238c23064" containerName="extract-utilities"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.012026 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c35c2088-3611-40c6-9052-65e238c23064" containerName="registry-server"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.013291 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bf6b4bd6f-r65dv"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.019564 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.019725 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jldm9"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.019564 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.020021 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.033071 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bf6b4bd6f-r65dv"]
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.043735 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.043999 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="14496a14-234c-4375-afc5-12458c70e15e" containerName="glance-log" containerID="cri-o://c869c1689a7b93f6ae3d29aea0e22914960ed5f18a2790d81d704c91127bc141" gracePeriod=30
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.044164 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="14496a14-234c-4375-afc5-12458c70e15e" containerName="glance-httpd" containerID="cri-o://453fb02790be82650f06c2805b40f324ace8ce078769e65b26a7189394db812f" gracePeriod=30
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.119335 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbps5\" (UniqueName: \"kubernetes.io/projected/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-kube-api-access-vbps5\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.119451 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-logs\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.119637 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-scripts\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.119785 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-horizon-secret-key\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.119861 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-config-data\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.123002 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.123435 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7618e381-4984-4367-8cf4-69070b8c6fe5" containerName="glance-log" containerID="cri-o://6d9a00e7734f3761e327649359f4afc22e17fb8d7b21852dc0ae28238eb666b5" gracePeriod=30
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.124134 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7618e381-4984-4367-8cf4-69070b8c6fe5" containerName="glance-httpd" containerID="cri-o://e1842706d0f176fca62a39b9ac85ca66b8d4aafcd5907fe9f3c3b1f6f4307760" gracePeriod=30
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.179898 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-fc574477c-hhjdg"]
Need to start a new one" pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.190149 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fc574477c-hhjdg"] Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.222089 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-scripts\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.222334 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-horizon-secret-key\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.222418 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-config-data\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.222536 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbps5\" (UniqueName: \"kubernetes.io/projected/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-kube-api-access-vbps5\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.222642 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-logs\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.223006 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-logs\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.224035 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-scripts\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.229990 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-horizon-secret-key\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.230289 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-config-data\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " 
pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.245756 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbps5\" (UniqueName: \"kubernetes.io/projected/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-kube-api-access-vbps5\") pod \"horizon-5bf6b4bd6f-r65dv\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.324799 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n4l2\" (UniqueName: \"kubernetes.io/projected/363f2f18-8a03-44f4-b56e-68325a86247f-kube-api-access-8n4l2\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.324885 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-config-data\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.324936 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/363f2f18-8a03-44f4-b56e-68325a86247f-horizon-secret-key\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.324972 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-scripts\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.325066 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363f2f18-8a03-44f4-b56e-68325a86247f-logs\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.345916 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.427343 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n4l2\" (UniqueName: \"kubernetes.io/projected/363f2f18-8a03-44f4-b56e-68325a86247f-kube-api-access-8n4l2\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.427432 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-config-data\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.427523 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/363f2f18-8a03-44f4-b56e-68325a86247f-horizon-secret-key\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.428126 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-scripts\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.428210 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363f2f18-8a03-44f4-b56e-68325a86247f-logs\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.428328 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-scripts\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.428675 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363f2f18-8a03-44f4-b56e-68325a86247f-logs\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.428754 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-config-data\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.431621 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/363f2f18-8a03-44f4-b56e-68325a86247f-horizon-secret-key\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.446269 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n4l2\" 
(UniqueName: \"kubernetes.io/projected/363f2f18-8a03-44f4-b56e-68325a86247f-kube-api-access-8n4l2\") pod \"horizon-fc574477c-hhjdg\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") " pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.502771 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:37:24 crc kubenswrapper[4909]: E1126 08:37:24.506301 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.515555 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.684501 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bf6b4bd6f-r65dv"] Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.714514 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-844fcddd89-784bm"] Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.716268 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.733654 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-844fcddd89-784bm"] Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.744942 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-config-data\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.745001 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60c3670-b331-4694-a090-36bea1a48307-logs\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.745163 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-scripts\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.745279 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f60c3670-b331-4694-a090-36bea1a48307-horizon-secret-key\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.745307 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsxds\" (UniqueName: 
\"kubernetes.io/projected/f60c3670-b331-4694-a090-36bea1a48307-kube-api-access-xsxds\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.809725 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bf6b4bd6f-r65dv"] Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.816005 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.847192 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f60c3670-b331-4694-a090-36bea1a48307-horizon-secret-key\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.847236 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsxds\" (UniqueName: \"kubernetes.io/projected/f60c3670-b331-4694-a090-36bea1a48307-kube-api-access-xsxds\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.847283 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-config-data\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.847299 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60c3670-b331-4694-a090-36bea1a48307-logs\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.847388 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-scripts\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.848167 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-scripts\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.848623 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-config-data\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.848733 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60c3670-b331-4694-a090-36bea1a48307-logs\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.851871 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f60c3670-b331-4694-a090-36bea1a48307-horizon-secret-key\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.866148 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsxds\" (UniqueName: \"kubernetes.io/projected/f60c3670-b331-4694-a090-36bea1a48307-kube-api-access-xsxds\") pod \"horizon-844fcddd89-784bm\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " pod="openstack/horizon-844fcddd89-784bm"
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.975178 4909 generic.go:334] "Generic (PLEG): container finished" podID="14496a14-234c-4375-afc5-12458c70e15e" containerID="c869c1689a7b93f6ae3d29aea0e22914960ed5f18a2790d81d704c91127bc141" exitCode=143
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.975269 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14496a14-234c-4375-afc5-12458c70e15e","Type":"ContainerDied","Data":"c869c1689a7b93f6ae3d29aea0e22914960ed5f18a2790d81d704c91127bc141"}
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.977649 4909 generic.go:334] "Generic (PLEG): container finished" podID="7618e381-4984-4367-8cf4-69070b8c6fe5" containerID="6d9a00e7734f3761e327649359f4afc22e17fb8d7b21852dc0ae28238eb666b5" exitCode=143
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.977701 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7618e381-4984-4367-8cf4-69070b8c6fe5","Type":"ContainerDied","Data":"6d9a00e7734f3761e327649359f4afc22e17fb8d7b21852dc0ae28238eb666b5"}
Nov 26 08:37:24 crc kubenswrapper[4909]: I1126 08:37:24.979396 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bf6b4bd6f-r65dv" event={"ID":"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042","Type":"ContainerStarted","Data":"10b7df897f0e769456c6a0104bb35d40c0f457f781aed5f8d476d1891a16a6e5"}
Nov 26 08:37:25 crc kubenswrapper[4909]: I1126 08:37:25.046455 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-844fcddd89-784bm"
Nov 26 08:37:25 crc kubenswrapper[4909]: I1126 08:37:25.089881 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fc574477c-hhjdg"]
Nov 26 08:37:25 crc kubenswrapper[4909]: W1126 08:37:25.101363 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod363f2f18_8a03_44f4_b56e_68325a86247f.slice/crio-e104b40dd797a278e8e05bf76bcca52a7e9fcea0aa3e018d8d87ec2017527271 WatchSource:0}: Error finding container e104b40dd797a278e8e05bf76bcca52a7e9fcea0aa3e018d8d87ec2017527271: Status 404 returned error can't find the container with id e104b40dd797a278e8e05bf76bcca52a7e9fcea0aa3e018d8d87ec2017527271
Nov 26 08:37:25 crc kubenswrapper[4909]: I1126 08:37:25.475691 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-844fcddd89-784bm"]
Nov 26 08:37:25 crc kubenswrapper[4909]: I1126 08:37:25.992236 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844fcddd89-784bm" event={"ID":"f60c3670-b331-4694-a090-36bea1a48307","Type":"ContainerStarted","Data":"840cef685df8130e95e5850193993750d4d38e2a9b37083cf174258740b2177f"}
Nov 26 08:37:25 crc kubenswrapper[4909]: I1126 08:37:25.994275 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fc574477c-hhjdg" event={"ID":"363f2f18-8a03-44f4-b56e-68325a86247f","Type":"ContainerStarted","Data":"e104b40dd797a278e8e05bf76bcca52a7e9fcea0aa3e018d8d87ec2017527271"}
Nov 26 08:37:28 crc kubenswrapper[4909]: I1126 08:37:28.021860 4909 generic.go:334] "Generic (PLEG): container finished" podID="14496a14-234c-4375-afc5-12458c70e15e" containerID="453fb02790be82650f06c2805b40f324ace8ce078769e65b26a7189394db812f" exitCode=0
Nov 26 08:37:28 crc kubenswrapper[4909]: I1126 08:37:28.021976 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14496a14-234c-4375-afc5-12458c70e15e","Type":"ContainerDied","Data":"453fb02790be82650f06c2805b40f324ace8ce078769e65b26a7189394db812f"}
Nov 26 08:37:28 crc kubenswrapper[4909]: I1126 08:37:28.024654 4909 generic.go:334] "Generic (PLEG): container finished" podID="7618e381-4984-4367-8cf4-69070b8c6fe5" containerID="e1842706d0f176fca62a39b9ac85ca66b8d4aafcd5907fe9f3c3b1f6f4307760" exitCode=0
Nov 26 08:37:28 crc kubenswrapper[4909]: I1126 08:37:28.024685 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7618e381-4984-4367-8cf4-69070b8c6fe5","Type":"ContainerDied","Data":"e1842706d0f176fca62a39b9ac85ca66b8d4aafcd5907fe9f3c3b1f6f4307760"}
Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.005749 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.073483 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7618e381-4984-4367-8cf4-69070b8c6fe5","Type":"ContainerDied","Data":"a9c0ded700f8362378666af19aac85361ccf7784b9a36b5b2a8954ac302aad04"}
Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.073537 4909 scope.go:117] "RemoveContainer" containerID="e1842706d0f176fca62a39b9ac85ca66b8d4aafcd5907fe9f3c3b1f6f4307760"
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.076053 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fc574477c-hhjdg" event={"ID":"363f2f18-8a03-44f4-b56e-68325a86247f","Type":"ContainerStarted","Data":"0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987"} Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.079778 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bf6b4bd6f-r65dv" event={"ID":"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042","Type":"ContainerStarted","Data":"029ce2bbc3b2a578cd70e9532c14a76a8bff7f2808e772f5acd4597ab26c3132"} Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.083213 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844fcddd89-784bm" event={"ID":"f60c3670-b331-4694-a090-36bea1a48307","Type":"ContainerStarted","Data":"5a2c7e376dd621080466e2cc928f88c2991367a397e6aafa1a581c8ff05ab4cb"} Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.102452 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-ceph\") pod \"7618e381-4984-4367-8cf4-69070b8c6fe5\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.102550 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-logs\") pod \"7618e381-4984-4367-8cf4-69070b8c6fe5\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.102621 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-846r8\" (UniqueName: \"kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-kube-api-access-846r8\") pod \"7618e381-4984-4367-8cf4-69070b8c6fe5\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.102700 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-scripts\") pod \"7618e381-4984-4367-8cf4-69070b8c6fe5\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.102828 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-combined-ca-bundle\") pod \"7618e381-4984-4367-8cf4-69070b8c6fe5\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.102892 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-httpd-run\") pod \"7618e381-4984-4367-8cf4-69070b8c6fe5\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.102941 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-config-data\") pod \"7618e381-4984-4367-8cf4-69070b8c6fe5\" (UID: \"7618e381-4984-4367-8cf4-69070b8c6fe5\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.107128 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7618e381-4984-4367-8cf4-69070b8c6fe5" (UID: "7618e381-4984-4367-8cf4-69070b8c6fe5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.107197 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-logs" (OuterVolumeSpecName: "logs") pod "7618e381-4984-4367-8cf4-69070b8c6fe5" (UID: "7618e381-4984-4367-8cf4-69070b8c6fe5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.107369 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-ceph" (OuterVolumeSpecName: "ceph") pod "7618e381-4984-4367-8cf4-69070b8c6fe5" (UID: "7618e381-4984-4367-8cf4-69070b8c6fe5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.108412 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-kube-api-access-846r8" (OuterVolumeSpecName: "kube-api-access-846r8") pod "7618e381-4984-4367-8cf4-69070b8c6fe5" (UID: "7618e381-4984-4367-8cf4-69070b8c6fe5"). InnerVolumeSpecName "kube-api-access-846r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.111405 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-scripts" (OuterVolumeSpecName: "scripts") pod "7618e381-4984-4367-8cf4-69070b8c6fe5" (UID: "7618e381-4984-4367-8cf4-69070b8c6fe5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.121729 4909 scope.go:117] "RemoveContainer" containerID="6d9a00e7734f3761e327649359f4afc22e17fb8d7b21852dc0ae28238eb666b5" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.183203 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7618e381-4984-4367-8cf4-69070b8c6fe5" (UID: "7618e381-4984-4367-8cf4-69070b8c6fe5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.204942 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-846r8\" (UniqueName: \"kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-kube-api-access-846r8\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.204970 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.204979 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.204986 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.204996 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7618e381-4984-4367-8cf4-69070b8c6fe5-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.205004 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7618e381-4984-4367-8cf4-69070b8c6fe5-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.207009 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-config-data" (OuterVolumeSpecName: "config-data") pod "7618e381-4984-4367-8cf4-69070b8c6fe5" (UID: "7618e381-4984-4367-8cf4-69070b8c6fe5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.306968 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7618e381-4984-4367-8cf4-69070b8c6fe5-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.418679 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.431033 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.454622 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:37:32 crc kubenswrapper[4909]: E1126 08:37:32.455085 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7618e381-4984-4367-8cf4-69070b8c6fe5" containerName="glance-httpd" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.455106 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7618e381-4984-4367-8cf4-69070b8c6fe5" containerName="glance-httpd" Nov 26 08:37:32 crc kubenswrapper[4909]: E1126 08:37:32.455155 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7618e381-4984-4367-8cf4-69070b8c6fe5" containerName="glance-log" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.455164 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7618e381-4984-4367-8cf4-69070b8c6fe5" containerName="glance-log" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.455346 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7618e381-4984-4367-8cf4-69070b8c6fe5" containerName="glance-httpd" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.455372 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7618e381-4984-4367-8cf4-69070b8c6fe5" containerName="glance-log" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.456471 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.459077 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.464934 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.510867 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17136d61-f21a-46a1-a2ef-565bed7c032f-logs\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.511338 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17136d61-f21a-46a1-a2ef-565bed7c032f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.511359 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17136d61-f21a-46a1-a2ef-565bed7c032f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.511396 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17136d61-f21a-46a1-a2ef-565bed7c032f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.511455 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/17136d61-f21a-46a1-a2ef-565bed7c032f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.511482 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5j5\" (UniqueName: \"kubernetes.io/projected/17136d61-f21a-46a1-a2ef-565bed7c032f-kube-api-access-hj5j5\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.511524 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17136d61-f21a-46a1-a2ef-565bed7c032f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.512237 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7618e381-4984-4367-8cf4-69070b8c6fe5" path="/var/lib/kubelet/pods/7618e381-4984-4367-8cf4-69070b8c6fe5/volumes" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.613001 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17136d61-f21a-46a1-a2ef-565bed7c032f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.613042 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17136d61-f21a-46a1-a2ef-565bed7c032f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.613077 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17136d61-f21a-46a1-a2ef-565bed7c032f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.613143 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/17136d61-f21a-46a1-a2ef-565bed7c032f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.613170 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj5j5\" (UniqueName: \"kubernetes.io/projected/17136d61-f21a-46a1-a2ef-565bed7c032f-kube-api-access-hj5j5\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.613206 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17136d61-f21a-46a1-a2ef-565bed7c032f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.613237 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17136d61-f21a-46a1-a2ef-565bed7c032f-logs\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.614473 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17136d61-f21a-46a1-a2ef-565bed7c032f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.614624 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17136d61-f21a-46a1-a2ef-565bed7c032f-logs\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.619159 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/17136d61-f21a-46a1-a2ef-565bed7c032f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.620985 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17136d61-f21a-46a1-a2ef-565bed7c032f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.622140 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/17136d61-f21a-46a1-a2ef-565bed7c032f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.624131 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17136d61-f21a-46a1-a2ef-565bed7c032f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.634378 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj5j5\" (UniqueName: \"kubernetes.io/projected/17136d61-f21a-46a1-a2ef-565bed7c032f-kube-api-access-hj5j5\") pod \"glance-default-internal-api-0\" (UID: \"17136d61-f21a-46a1-a2ef-565bed7c032f\") " pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.728061 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.775453 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.816033 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-scripts\") pod \"14496a14-234c-4375-afc5-12458c70e15e\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.816104 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v7bj\" (UniqueName: \"kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-kube-api-access-9v7bj\") pod \"14496a14-234c-4375-afc5-12458c70e15e\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.816158 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-config-data\") pod \"14496a14-234c-4375-afc5-12458c70e15e\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.816232 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-logs\") pod \"14496a14-234c-4375-afc5-12458c70e15e\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.816340 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-httpd-run\") pod \"14496a14-234c-4375-afc5-12458c70e15e\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.816393 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-combined-ca-bundle\") pod \"14496a14-234c-4375-afc5-12458c70e15e\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.816423 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-ceph\") pod \"14496a14-234c-4375-afc5-12458c70e15e\" (UID: \"14496a14-234c-4375-afc5-12458c70e15e\") " Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.818016 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-logs" (OuterVolumeSpecName: "logs") pod "14496a14-234c-4375-afc5-12458c70e15e" (UID: "14496a14-234c-4375-afc5-12458c70e15e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.818769 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "14496a14-234c-4375-afc5-12458c70e15e" (UID: "14496a14-234c-4375-afc5-12458c70e15e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.821431 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-scripts" (OuterVolumeSpecName: "scripts") pod "14496a14-234c-4375-afc5-12458c70e15e" (UID: "14496a14-234c-4375-afc5-12458c70e15e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.823704 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-kube-api-access-9v7bj" (OuterVolumeSpecName: "kube-api-access-9v7bj") pod "14496a14-234c-4375-afc5-12458c70e15e" (UID: "14496a14-234c-4375-afc5-12458c70e15e"). InnerVolumeSpecName "kube-api-access-9v7bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.829056 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-ceph" (OuterVolumeSpecName: "ceph") pod "14496a14-234c-4375-afc5-12458c70e15e" (UID: "14496a14-234c-4375-afc5-12458c70e15e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.882714 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14496a14-234c-4375-afc5-12458c70e15e" (UID: "14496a14-234c-4375-afc5-12458c70e15e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.901807 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-config-data" (OuterVolumeSpecName: "config-data") pod "14496a14-234c-4375-afc5-12458c70e15e" (UID: "14496a14-234c-4375-afc5-12458c70e15e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.920866 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.920898 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v7bj\" (UniqueName: \"kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-kube-api-access-9v7bj\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.920909 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.920917 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.920927 4909 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14496a14-234c-4375-afc5-12458c70e15e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.920949 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14496a14-234c-4375-afc5-12458c70e15e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:32 crc kubenswrapper[4909]: I1126 08:37:32.920958 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/14496a14-234c-4375-afc5-12458c70e15e-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.095305 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14496a14-234c-4375-afc5-12458c70e15e","Type":"ContainerDied","Data":"3b37ad1402453f083696f6073ecb9ec192db3e9fe7b2e24180ec2d6a2298d070"} Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.095347 4909 scope.go:117] "RemoveContainer" containerID="453fb02790be82650f06c2805b40f324ace8ce078769e65b26a7189394db812f" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.095451 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.103028 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fc574477c-hhjdg" event={"ID":"363f2f18-8a03-44f4-b56e-68325a86247f","Type":"ContainerStarted","Data":"d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d"} Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.112796 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bf6b4bd6f-r65dv" event={"ID":"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042","Type":"ContainerStarted","Data":"58011a342f8fc7ba40896a78f413b847cf54bdccf43d9aa54b4cafd6669b7d39"} Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.112928 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bf6b4bd6f-r65dv" podUID="f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" containerName="horizon-log" containerID="cri-o://029ce2bbc3b2a578cd70e9532c14a76a8bff7f2808e772f5acd4597ab26c3132" gracePeriod=30 Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.113120 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bf6b4bd6f-r65dv" podUID="f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" containerName="horizon" containerID="cri-o://58011a342f8fc7ba40896a78f413b847cf54bdccf43d9aa54b4cafd6669b7d39" gracePeriod=30 Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.117065 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844fcddd89-784bm" event={"ID":"f60c3670-b331-4694-a090-36bea1a48307","Type":"ContainerStarted","Data":"5b8a522d3b3f036f06658f214ad156b9b7341fa4b260dc7d3f29b305baa5f098"} Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.131286 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-fc574477c-hhjdg" podStartSLOduration=2.528724418 podStartE2EDuration="9.131270069s" podCreationTimestamp="2025-11-26 08:37:24 +0000 UTC" firstStartedPulling="2025-11-26 08:37:25.103616216 +0000 UTC m=+5817.249827382" lastFinishedPulling="2025-11-26 08:37:31.706161847 +0000 UTC m=+5823.852373033" observedRunningTime="2025-11-26 08:37:33.124064492 +0000 UTC m=+5825.270275658" watchObservedRunningTime="2025-11-26 08:37:33.131270069 +0000 UTC m=+5825.277481235" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.149861 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5bf6b4bd6f-r65dv" podStartSLOduration=3.292792466 podStartE2EDuration="10.149844746s" podCreationTimestamp="2025-11-26 08:37:23 +0000 UTC" firstStartedPulling="2025-11-26 08:37:24.815805688 +0000 UTC m=+5816.962016854" lastFinishedPulling="2025-11-26 08:37:31.672857948 +0000 UTC m=+5823.819069134" observedRunningTime="2025-11-26 08:37:33.145250261 +0000 UTC m=+5825.291461427" watchObservedRunningTime="2025-11-26 08:37:33.149844746 +0000 UTC m=+5825.296055912" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.153261 4909 scope.go:117] "RemoveContainer" containerID="c869c1689a7b93f6ae3d29aea0e22914960ed5f18a2790d81d704c91127bc141" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.165119 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-844fcddd89-784bm" podStartSLOduration=2.973708779 podStartE2EDuration="9.165101923s" podCreationTimestamp="2025-11-26 08:37:24 +0000 UTC" firstStartedPulling="2025-11-26 08:37:25.481290179 +0000 UTC m=+5817.627501345" lastFinishedPulling="2025-11-26 
08:37:31.672683313 +0000 UTC m=+5823.818894489" observedRunningTime="2025-11-26 08:37:33.163719535 +0000 UTC m=+5825.309930701" watchObservedRunningTime="2025-11-26 08:37:33.165101923 +0000 UTC m=+5825.311313089" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.191274 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.199848 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.209202 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:37:33 crc kubenswrapper[4909]: E1126 08:37:33.209609 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14496a14-234c-4375-afc5-12458c70e15e" containerName="glance-log" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.209625 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="14496a14-234c-4375-afc5-12458c70e15e" containerName="glance-log" Nov 26 08:37:33 crc kubenswrapper[4909]: E1126 08:37:33.209651 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14496a14-234c-4375-afc5-12458c70e15e" containerName="glance-httpd" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.209658 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="14496a14-234c-4375-afc5-12458c70e15e" containerName="glance-httpd" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.209848 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="14496a14-234c-4375-afc5-12458c70e15e" containerName="glance-log" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.209875 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="14496a14-234c-4375-afc5-12458c70e15e" containerName="glance-httpd" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.210847 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.219929 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.221113 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.333629 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16671caf-93a6-40ad-8f24-b053cb477b29-config-data\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.333665 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16671caf-93a6-40ad-8f24-b053cb477b29-ceph\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.333977 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16671caf-93a6-40ad-8f24-b053cb477b29-logs\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.334129 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16671caf-93a6-40ad-8f24-b053cb477b29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.334164 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16671caf-93a6-40ad-8f24-b053cb477b29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.334231 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16671caf-93a6-40ad-8f24-b053cb477b29-scripts\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.334323 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz8cj\" (UniqueName: \"kubernetes.io/projected/16671caf-93a6-40ad-8f24-b053cb477b29-kube-api-access-tz8cj\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.384440 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 08:37:33 crc kubenswrapper[4909]: W1126 08:37:33.389697 4909 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17136d61_f21a_46a1_a2ef_565bed7c032f.slice/crio-05befee418a1e97aa6863e2363c602e3315581b34ad6f47f2bf420900e9e147a WatchSource:0}: Error finding container 05befee418a1e97aa6863e2363c602e3315581b34ad6f47f2bf420900e9e147a: Status 404 returned error can't find the container with id 05befee418a1e97aa6863e2363c602e3315581b34ad6f47f2bf420900e9e147a Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.437657 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16671caf-93a6-40ad-8f24-b053cb477b29-logs\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.437737 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16671caf-93a6-40ad-8f24-b053cb477b29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.437754 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16671caf-93a6-40ad-8f24-b053cb477b29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.437816 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16671caf-93a6-40ad-8f24-b053cb477b29-scripts\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.437860 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz8cj\" (UniqueName: \"kubernetes.io/projected/16671caf-93a6-40ad-8f24-b053cb477b29-kube-api-access-tz8cj\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.437937 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16671caf-93a6-40ad-8f24-b053cb477b29-config-data\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.437954 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16671caf-93a6-40ad-8f24-b053cb477b29-ceph\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.439851 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16671caf-93a6-40ad-8f24-b053cb477b29-logs\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.439904 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16671caf-93a6-40ad-8f24-b053cb477b29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.446573 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16671caf-93a6-40ad-8f24-b053cb477b29-ceph\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.446926 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16671caf-93a6-40ad-8f24-b053cb477b29-config-data\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.446955 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16671caf-93a6-40ad-8f24-b053cb477b29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.447576 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16671caf-93a6-40ad-8f24-b053cb477b29-scripts\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.467256 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz8cj\" (UniqueName: \"kubernetes.io/projected/16671caf-93a6-40ad-8f24-b053cb477b29-kube-api-access-tz8cj\") pod \"glance-default-external-api-0\" (UID: \"16671caf-93a6-40ad-8f24-b053cb477b29\") " pod="openstack/glance-default-external-api-0" Nov 26 08:37:33 crc kubenswrapper[4909]: I1126 08:37:33.555821 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 08:37:34 crc kubenswrapper[4909]: I1126 08:37:34.129817 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"17136d61-f21a-46a1-a2ef-565bed7c032f","Type":"ContainerStarted","Data":"d68ef70590ed94e793eae07ec3dbe5c33e082c4e5a1ec4a15c41d9c273f930ee"} Nov 26 08:37:34 crc kubenswrapper[4909]: I1126 08:37:34.130408 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"17136d61-f21a-46a1-a2ef-565bed7c032f","Type":"ContainerStarted","Data":"05befee418a1e97aa6863e2363c602e3315581b34ad6f47f2bf420900e9e147a"} Nov 26 08:37:34 crc kubenswrapper[4909]: W1126 08:37:34.145038 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16671caf_93a6_40ad_8f24_b053cb477b29.slice/crio-d922b34294aed8df59d6ac0e2fc55fdafcaeb84e0e7fc18cf346beacd225b2ad WatchSource:0}: Error finding container d922b34294aed8df59d6ac0e2fc55fdafcaeb84e0e7fc18cf346beacd225b2ad: Status 404 returned error can't find the container with id d922b34294aed8df59d6ac0e2fc55fdafcaeb84e0e7fc18cf346beacd225b2ad Nov 26 08:37:34 crc kubenswrapper[4909]: I1126 08:37:34.147167 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 08:37:34 crc kubenswrapper[4909]: I1126 08:37:34.355052 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:37:34 crc kubenswrapper[4909]: I1126 08:37:34.516651 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14496a14-234c-4375-afc5-12458c70e15e" path="/var/lib/kubelet/pods/14496a14-234c-4375-afc5-12458c70e15e/volumes" Nov 26 08:37:34 crc kubenswrapper[4909]: I1126 08:37:34.519636 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:34 crc kubenswrapper[4909]: I1126 08:37:34.519694 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:35 crc kubenswrapper[4909]: I1126 08:37:35.046672 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:35 crc kubenswrapper[4909]: I1126 08:37:35.046972 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:35 crc kubenswrapper[4909]: I1126 08:37:35.154390 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"17136d61-f21a-46a1-a2ef-565bed7c032f","Type":"ContainerStarted","Data":"87a5c6f2f35cb1f9d70426b7cff856d9aeefa4ade0717148c848bdb03871b344"} Nov 26 08:37:35 crc kubenswrapper[4909]: I1126 08:37:35.157835 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"16671caf-93a6-40ad-8f24-b053cb477b29","Type":"ContainerStarted","Data":"844b7b650af8192d8bed3d987cc39d056fa1a2510bbb96d848c022aff3427de2"} Nov 26 08:37:35 crc kubenswrapper[4909]: I1126 08:37:35.157861 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"16671caf-93a6-40ad-8f24-b053cb477b29","Type":"ContainerStarted","Data":"d922b34294aed8df59d6ac0e2fc55fdafcaeb84e0e7fc18cf346beacd225b2ad"} Nov 26 08:37:35 crc kubenswrapper[4909]: I1126 08:37:35.185228 4909 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.185209781 podStartE2EDuration="3.185209781s" podCreationTimestamp="2025-11-26 08:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:37:35.176026431 +0000 UTC m=+5827.322237597" watchObservedRunningTime="2025-11-26 08:37:35.185209781 +0000 UTC m=+5827.331420947" Nov 26 08:37:35 crc kubenswrapper[4909]: I1126 08:37:35.499141 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:37:35 crc kubenswrapper[4909]: E1126 08:37:35.499436 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:37:36 crc kubenswrapper[4909]: I1126 08:37:36.173378 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"16671caf-93a6-40ad-8f24-b053cb477b29","Type":"ContainerStarted","Data":"54adafc46742d11402e84ddcdd36c803f526b886accd9f62bc7fd732e7660acf"} Nov 26 08:37:36 crc kubenswrapper[4909]: I1126 08:37:36.212183 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.212164102 podStartE2EDuration="3.212164102s" podCreationTimestamp="2025-11-26 08:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:37:36.191660382 +0000 UTC m=+5828.337871598" watchObservedRunningTime="2025-11-26 08:37:36.212164102 +0000 UTC m=+5828.358375268" Nov 26 08:37:39 crc kubenswrapper[4909]: I1126 08:37:39.051625 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-qb2cf"] Nov 26 08:37:39 crc kubenswrapper[4909]: I1126 08:37:39.066352 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-qb2cf"] Nov 26 08:37:39 crc kubenswrapper[4909]: I1126 08:37:39.467061 4909 scope.go:117] "RemoveContainer" containerID="b512a3a0c5b7b825c9d595fc05d34fcc60a710b503c26ea378c8f406dd86fad5" Nov 26 08:37:39 crc kubenswrapper[4909]: I1126 08:37:39.541856 4909 scope.go:117] "RemoveContainer" containerID="c05af6cab615ccc15c9bb0cfd6a24f63508d820812aab49b09346124b9bd3914" Nov 26 08:37:40 crc kubenswrapper[4909]: I1126 08:37:40.518547 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be893ffe-0db2-4130-bff6-50da7ff31f66" path="/var/lib/kubelet/pods/be893ffe-0db2-4130-bff6-50da7ff31f66/volumes" Nov 26 08:37:42 crc kubenswrapper[4909]: I1126 08:37:42.775927 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 26 08:37:42 crc kubenswrapper[4909]: I1126 08:37:42.776553 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 26 08:37:42 crc kubenswrapper[4909]: I1126 08:37:42.827803 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 26 08:37:42 
crc kubenswrapper[4909]: I1126 08:37:42.854795 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 26 08:37:43 crc kubenswrapper[4909]: I1126 08:37:43.267894 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 26 08:37:43 crc kubenswrapper[4909]: I1126 08:37:43.267940 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 26 08:37:43 crc kubenswrapper[4909]: I1126 08:37:43.557065 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 26 08:37:43 crc kubenswrapper[4909]: I1126 08:37:43.557129 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 26 08:37:43 crc kubenswrapper[4909]: I1126 08:37:43.593272 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 26 08:37:43 crc kubenswrapper[4909]: I1126 08:37:43.601689 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 26 08:37:44 crc kubenswrapper[4909]: I1126 08:37:44.278874 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 26 08:37:44 crc kubenswrapper[4909]: I1126 08:37:44.279239 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 26 08:37:44 crc kubenswrapper[4909]: I1126 08:37:44.519548 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fc574477c-hhjdg" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.124:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8080: connect: connection refused" Nov 26 08:37:45 crc kubenswrapper[4909]: I1126 08:37:45.049876 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-844fcddd89-784bm" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.125:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.125:8080: connect: connection refused" Nov 26 08:37:45 crc kubenswrapper[4909]: I1126 08:37:45.247086 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 26 08:37:45 crc kubenswrapper[4909]: I1126 08:37:45.282032 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 26 08:37:46 crc kubenswrapper[4909]: I1126 08:37:46.299076 4909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 08:37:46 crc kubenswrapper[4909]: I1126 08:37:46.299356 4909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 08:37:46 crc kubenswrapper[4909]: I1126 08:37:46.329300 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 26 08:37:46 crc kubenswrapper[4909]: I1126 08:37:46.825269 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 26 08:37:49 crc kubenswrapper[4909]: I1126 08:37:49.053811 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-c56e-account-create-w5dkn"] Nov 26 08:37:49 crc kubenswrapper[4909]: I1126 08:37:49.062920 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c56e-account-create-w5dkn"] Nov 26 08:37:50 crc kubenswrapper[4909]: I1126 08:37:50.498974 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:37:50 crc kubenswrapper[4909]: E1126 08:37:50.500354 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:37:50 crc kubenswrapper[4909]: I1126 08:37:50.523922 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c4aa50-bced-4761-984b-ff6e82851af7" path="/var/lib/kubelet/pods/25c4aa50-bced-4761-984b-ff6e82851af7/volumes" Nov 26 08:37:56 crc kubenswrapper[4909]: I1126 08:37:56.396267 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:56 crc kubenswrapper[4909]: I1126 08:37:56.979253 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:58 crc kubenswrapper[4909]: I1126 08:37:58.029057 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-fc574477c-hhjdg" Nov 26 08:37:58 crc kubenswrapper[4909]: I1126 08:37:58.040760 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jstcz"] Nov 26 08:37:58 crc kubenswrapper[4909]: I1126 08:37:58.068696 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jstcz"] Nov 26 08:37:58 crc kubenswrapper[4909]: I1126 08:37:58.513012 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93b07df5-f911-4f65-bc31-da797bb04f2e" path="/var/lib/kubelet/pods/93b07df5-f911-4f65-bc31-da797bb04f2e/volumes" Nov 26 08:37:58 crc kubenswrapper[4909]: I1126 08:37:58.664541 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:37:58 crc kubenswrapper[4909]: I1126 08:37:58.800892 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-fc574477c-hhjdg"] Nov 26 08:37:58 crc kubenswrapper[4909]: I1126 08:37:58.801740 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fc574477c-hhjdg" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" containerName="horizon" containerID="cri-o://d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d" gracePeriod=30 Nov 26 08:37:58 crc kubenswrapper[4909]: I1126 08:37:58.801525 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fc574477c-hhjdg" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" containerName="horizon-log" containerID="cri-o://0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987" gracePeriod=30 Nov 26 08:38:01 crc kubenswrapper[4909]: I1126 08:38:01.498875 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:38:01 crc kubenswrapper[4909]: E1126 08:38:01.499318 4909 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:38:02 crc kubenswrapper[4909]: I1126 08:38:02.494292 4909 generic.go:334] "Generic (PLEG): container finished" podID="363f2f18-8a03-44f4-b56e-68325a86247f" containerID="d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d" exitCode=0 Nov 26 08:38:02 crc kubenswrapper[4909]: I1126 08:38:02.494335 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fc574477c-hhjdg" event={"ID":"363f2f18-8a03-44f4-b56e-68325a86247f","Type":"ContainerDied","Data":"d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d"} Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.512829 4909 generic.go:334] "Generic (PLEG): container finished" podID="f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" containerID="58011a342f8fc7ba40896a78f413b847cf54bdccf43d9aa54b4cafd6669b7d39" exitCode=137 Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.513127 4909 generic.go:334] "Generic (PLEG): container finished" podID="f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" containerID="029ce2bbc3b2a578cd70e9532c14a76a8bff7f2808e772f5acd4597ab26c3132" exitCode=137 Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.512918 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bf6b4bd6f-r65dv" event={"ID":"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042","Type":"ContainerDied","Data":"58011a342f8fc7ba40896a78f413b847cf54bdccf43d9aa54b4cafd6669b7d39"} Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.513164 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bf6b4bd6f-r65dv" event={"ID":"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042","Type":"ContainerDied","Data":"029ce2bbc3b2a578cd70e9532c14a76a8bff7f2808e772f5acd4597ab26c3132"} Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.611348 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.726438 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbps5\" (UniqueName: \"kubernetes.io/projected/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-kube-api-access-vbps5\") pod \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.726543 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-config-data\") pod \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.727265 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-logs" (OuterVolumeSpecName: "logs") pod "f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" (UID: "f7fd8060-0b5f-4b49-b3cf-aa2cf8170042"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.726776 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-logs\") pod \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.727438 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-horizon-secret-key\") pod \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.728062 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-scripts\") pod \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\" (UID: \"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042\") " Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.728958 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.735435 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" (UID: "f7fd8060-0b5f-4b49-b3cf-aa2cf8170042"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.735507 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-kube-api-access-vbps5" (OuterVolumeSpecName: "kube-api-access-vbps5") pod "f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" (UID: "f7fd8060-0b5f-4b49-b3cf-aa2cf8170042"). InnerVolumeSpecName "kube-api-access-vbps5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.752467 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-config-data" (OuterVolumeSpecName: "config-data") pod "f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" (UID: "f7fd8060-0b5f-4b49-b3cf-aa2cf8170042"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.770373 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-scripts" (OuterVolumeSpecName: "scripts") pod "f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" (UID: "f7fd8060-0b5f-4b49-b3cf-aa2cf8170042"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.831012 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbps5\" (UniqueName: \"kubernetes.io/projected/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-kube-api-access-vbps5\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.831049 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.831062 4909 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:03 crc kubenswrapper[4909]: I1126 08:38:03.831076 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:04 crc kubenswrapper[4909]: I1126 08:38:04.517504 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fc574477c-hhjdg" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.124:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8080: connect: connection refused" Nov 26 08:38:04 crc kubenswrapper[4909]: I1126 08:38:04.525559 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bf6b4bd6f-r65dv" event={"ID":"f7fd8060-0b5f-4b49-b3cf-aa2cf8170042","Type":"ContainerDied","Data":"10b7df897f0e769456c6a0104bb35d40c0f457f781aed5f8d476d1891a16a6e5"} Nov 26 08:38:04 crc kubenswrapper[4909]: I1126 08:38:04.525639 4909 scope.go:117] "RemoveContainer" containerID="58011a342f8fc7ba40896a78f413b847cf54bdccf43d9aa54b4cafd6669b7d39" Nov 26 08:38:04 crc kubenswrapper[4909]: I1126 08:38:04.525672 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bf6b4bd6f-r65dv" Nov 26 08:38:04 crc kubenswrapper[4909]: I1126 08:38:04.570537 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bf6b4bd6f-r65dv"] Nov 26 08:38:04 crc kubenswrapper[4909]: I1126 08:38:04.578571 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5bf6b4bd6f-r65dv"] Nov 26 08:38:04 crc kubenswrapper[4909]: I1126 08:38:04.743025 4909 scope.go:117] "RemoveContainer" containerID="029ce2bbc3b2a578cd70e9532c14a76a8bff7f2808e772f5acd4597ab26c3132" Nov 26 08:38:06 crc kubenswrapper[4909]: I1126 08:38:06.526665 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" path="/var/lib/kubelet/pods/f7fd8060-0b5f-4b49-b3cf-aa2cf8170042/volumes" Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.837322 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6bd886c577-ttt6q"] Nov 26 08:38:11 crc kubenswrapper[4909]: E1126 08:38:11.838400 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" containerName="horizon" Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.838420 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" containerName="horizon" Nov 26 08:38:11 crc kubenswrapper[4909]: E1126 08:38:11.838444 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" containerName="horizon-log" Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.838453 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" containerName="horizon-log" Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.838705 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" containerName="horizon-log" Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.838737 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fd8060-0b5f-4b49-b3cf-aa2cf8170042" containerName="horizon" Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.840056 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.852523 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bd886c577-ttt6q"]
Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.936652 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6ll2\" (UniqueName: \"kubernetes.io/projected/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-kube-api-access-x6ll2\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.936755 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-scripts\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.936809 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-logs\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.937062 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-horizon-secret-key\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:11 crc kubenswrapper[4909]: I1126 08:38:11.937202 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-config-data\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.039475 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-scripts\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.039616 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-logs\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.039754 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-horizon-secret-key\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.039828 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-config-data\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.039879 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6ll2\" (UniqueName: \"kubernetes.io/projected/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-kube-api-access-x6ll2\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.040094 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-logs\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.040227 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-scripts\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.041021 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-config-data\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.047273 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-horizon-secret-key\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.069619 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6ll2\" (UniqueName: \"kubernetes.io/projected/d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb-kube-api-access-x6ll2\") pod \"horizon-6bd886c577-ttt6q\" (UID: \"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb\") " pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.157451 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:12 crc kubenswrapper[4909]: I1126 08:38:12.699502 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bd886c577-ttt6q"]
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.093716 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-9nhsg"]
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.095660 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9nhsg"
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.107168 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-9nhsg"]
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.269969 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xttxr\" (UniqueName: \"kubernetes.io/projected/03e45ff8-93e7-4ff0-a81e-c5b8236e22d8-kube-api-access-xttxr\") pod \"heat-db-create-9nhsg\" (UID: \"03e45ff8-93e7-4ff0-a81e-c5b8236e22d8\") " pod="openstack/heat-db-create-9nhsg"
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.371450 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xttxr\" (UniqueName: \"kubernetes.io/projected/03e45ff8-93e7-4ff0-a81e-c5b8236e22d8-kube-api-access-xttxr\") pod \"heat-db-create-9nhsg\" (UID: \"03e45ff8-93e7-4ff0-a81e-c5b8236e22d8\") " pod="openstack/heat-db-create-9nhsg"
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.390699 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xttxr\" (UniqueName: \"kubernetes.io/projected/03e45ff8-93e7-4ff0-a81e-c5b8236e22d8-kube-api-access-xttxr\") pod \"heat-db-create-9nhsg\" (UID: \"03e45ff8-93e7-4ff0-a81e-c5b8236e22d8\") " pod="openstack/heat-db-create-9nhsg"
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.415159 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9nhsg"
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.622529 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bd886c577-ttt6q" event={"ID":"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb","Type":"ContainerStarted","Data":"157c379611dfe76dd9317a3768fbade6e90e18fdb8a57b4e7360ba367ff736e2"}
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.622817 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bd886c577-ttt6q" event={"ID":"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb","Type":"ContainerStarted","Data":"e4669e9805f8804ab269d7634a7f574d6b5d33603279b86ac557f56899fcf67a"}
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.622827 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bd886c577-ttt6q" event={"ID":"d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb","Type":"ContainerStarted","Data":"17f351c44add8f721f41b09852eca6e6ff5e501f8cbdd5a10d6675f4191ebece"}
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.645733 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6bd886c577-ttt6q" podStartSLOduration=2.645715003 podStartE2EDuration="2.645715003s" podCreationTimestamp="2025-11-26 08:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:38:13.643038369 +0000 UTC m=+5865.789249535" watchObservedRunningTime="2025-11-26 08:38:13.645715003 +0000 UTC m=+5865.791926169"
Nov 26 08:38:13 crc kubenswrapper[4909]: I1126 08:38:13.908178 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-9nhsg"]
Nov 26 08:38:13 crc kubenswrapper[4909]: W1126 08:38:13.916468 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03e45ff8_93e7_4ff0_a81e_c5b8236e22d8.slice/crio-aee4b41b822479d5d870d7546b7245705951ba81d1cbdcb144a515ebe9400f19 WatchSource:0}: Error finding container aee4b41b822479d5d870d7546b7245705951ba81d1cbdcb144a515ebe9400f19: Status 404 returned error can't find the container with id aee4b41b822479d5d870d7546b7245705951ba81d1cbdcb144a515ebe9400f19
Nov 26 08:38:14 crc kubenswrapper[4909]: I1126 08:38:14.517273 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fc574477c-hhjdg" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.124:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8080: connect: connection refused"
Nov 26 08:38:14 crc kubenswrapper[4909]: I1126 08:38:14.633614 4909 generic.go:334] "Generic (PLEG): container finished" podID="03e45ff8-93e7-4ff0-a81e-c5b8236e22d8" containerID="4dea50062af10be4784171c7ec91c1635a1e085821cb27669534fba1d44b53e9" exitCode=0
Nov 26 08:38:14 crc kubenswrapper[4909]: I1126 08:38:14.634910 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9nhsg" event={"ID":"03e45ff8-93e7-4ff0-a81e-c5b8236e22d8","Type":"ContainerDied","Data":"4dea50062af10be4784171c7ec91c1635a1e085821cb27669534fba1d44b53e9"}
Nov 26 08:38:14 crc kubenswrapper[4909]: I1126 08:38:14.634938 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9nhsg" event={"ID":"03e45ff8-93e7-4ff0-a81e-c5b8236e22d8","Type":"ContainerStarted","Data":"aee4b41b822479d5d870d7546b7245705951ba81d1cbdcb144a515ebe9400f19"}
Nov 26 08:38:15 crc kubenswrapper[4909]: I1126 08:38:15.500580 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed"
Nov 26 08:38:15 crc kubenswrapper[4909]: E1126 08:38:15.501182 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:38:16 crc kubenswrapper[4909]: I1126 08:38:16.089000 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9nhsg"
Nov 26 08:38:16 crc kubenswrapper[4909]: I1126 08:38:16.227785 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xttxr\" (UniqueName: \"kubernetes.io/projected/03e45ff8-93e7-4ff0-a81e-c5b8236e22d8-kube-api-access-xttxr\") pod \"03e45ff8-93e7-4ff0-a81e-c5b8236e22d8\" (UID: \"03e45ff8-93e7-4ff0-a81e-c5b8236e22d8\") "
Nov 26 08:38:16 crc kubenswrapper[4909]: I1126 08:38:16.233546 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e45ff8-93e7-4ff0-a81e-c5b8236e22d8-kube-api-access-xttxr" (OuterVolumeSpecName: "kube-api-access-xttxr") pod "03e45ff8-93e7-4ff0-a81e-c5b8236e22d8" (UID: "03e45ff8-93e7-4ff0-a81e-c5b8236e22d8"). InnerVolumeSpecName "kube-api-access-xttxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
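Two recurring patterns here are worth noting. The W-level manager.go "Status 404" message is a benign race: cAdvisor sees the new crio- cgroup before CRI-O has registered the container, and the watch event is simply retried. The CrashLoopBackOff record, by contrast, is a real restart loop; "back-off 5m0s" is the ceiling of kubelet's restart backoff, which (assuming the defaults kubelet ships with: 10s initial delay, doubling, 5m cap) grows like this:

    # Kubelet-style restart backoff: doubling delay with a 5-minute ceiling,
    # matching the "back-off 5m0s" quoted in the CrashLoopBackOff record above.
    from itertools import islice

    def backoff_schedule(initial=10, factor=2, cap=300):
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    print(list(islice(backoff_schedule(), 8)))
    # [10, 20, 40, 80, 160, 300, 300, 300]

Once the cap is reached, the pod is retried every five minutes until a start succeeds long enough to reset the counter, which is why the same "Error syncing pod, skipping" line for machine-config-daemon-4lffv repeats throughout this section.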
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:38:16 crc kubenswrapper[4909]: I1126 08:38:16.330569 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xttxr\" (UniqueName: \"kubernetes.io/projected/03e45ff8-93e7-4ff0-a81e-c5b8236e22d8-kube-api-access-xttxr\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:16 crc kubenswrapper[4909]: I1126 08:38:16.659561 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9nhsg" event={"ID":"03e45ff8-93e7-4ff0-a81e-c5b8236e22d8","Type":"ContainerDied","Data":"aee4b41b822479d5d870d7546b7245705951ba81d1cbdcb144a515ebe9400f19"} Nov 26 08:38:16 crc kubenswrapper[4909]: I1126 08:38:16.659625 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aee4b41b822479d5d870d7546b7245705951ba81d1cbdcb144a515ebe9400f19" Nov 26 08:38:16 crc kubenswrapper[4909]: I1126 08:38:16.659684 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9nhsg" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.726851 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hqdth"] Nov 26 08:38:18 crc kubenswrapper[4909]: E1126 08:38:18.728216 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e45ff8-93e7-4ff0-a81e-c5b8236e22d8" containerName="mariadb-database-create" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.728248 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e45ff8-93e7-4ff0-a81e-c5b8236e22d8" containerName="mariadb-database-create" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.729745 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e45ff8-93e7-4ff0-a81e-c5b8236e22d8" containerName="mariadb-database-create" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.735124 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.736432 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqdth"] Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.886745 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-catalog-content\") pod \"redhat-operators-hqdth\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.887017 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-utilities\") pod \"redhat-operators-hqdth\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.887051 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xck5\" (UniqueName: \"kubernetes.io/projected/163ab6c1-dae7-4980-9104-08e32bee36aa-kube-api-access-8xck5\") pod \"redhat-operators-hqdth\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.988801 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-utilities\") pod \"redhat-operators-hqdth\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.988865 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xck5\" (UniqueName: \"kubernetes.io/projected/163ab6c1-dae7-4980-9104-08e32bee36aa-kube-api-access-8xck5\") pod \"redhat-operators-hqdth\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.988943 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-catalog-content\") pod \"redhat-operators-hqdth\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.989418 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-utilities\") pod \"redhat-operators-hqdth\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:18 crc kubenswrapper[4909]: I1126 08:38:18.989622 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-catalog-content\") pod \"redhat-operators-hqdth\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:19 crc kubenswrapper[4909]: I1126 08:38:19.034451 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8xck5\" (UniqueName: \"kubernetes.io/projected/163ab6c1-dae7-4980-9104-08e32bee36aa-kube-api-access-8xck5\") pod \"redhat-operators-hqdth\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:19 crc kubenswrapper[4909]: I1126 08:38:19.065025 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:19 crc kubenswrapper[4909]: I1126 08:38:19.571261 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqdth"] Nov 26 08:38:19 crc kubenswrapper[4909]: I1126 08:38:19.690309 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqdth" event={"ID":"163ab6c1-dae7-4980-9104-08e32bee36aa","Type":"ContainerStarted","Data":"e36ce71031fb195016fd3ba33b33be7355ab0b6528afef5feb4b663ee9ad991d"} Nov 26 08:38:20 crc kubenswrapper[4909]: I1126 08:38:20.704445 4909 generic.go:334] "Generic (PLEG): container finished" podID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerID="ca0e33201d3f749ea8c6f4eec6726481f164cc4a66c0a032fdcd7ea04e6f844d" exitCode=0 Nov 26 08:38:20 crc kubenswrapper[4909]: I1126 08:38:20.704722 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqdth" event={"ID":"163ab6c1-dae7-4980-9104-08e32bee36aa","Type":"ContainerDied","Data":"ca0e33201d3f749ea8c6f4eec6726481f164cc4a66c0a032fdcd7ea04e6f844d"} Nov 26 08:38:21 crc kubenswrapper[4909]: I1126 08:38:21.726094 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqdth" event={"ID":"163ab6c1-dae7-4980-9104-08e32bee36aa","Type":"ContainerStarted","Data":"a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b"} Nov 26 08:38:22 crc kubenswrapper[4909]: I1126 08:38:22.158140 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6bd886c577-ttt6q" Nov 26 08:38:22 crc kubenswrapper[4909]: I1126 08:38:22.158214 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6bd886c577-ttt6q" Nov 26 08:38:22 crc kubenswrapper[4909]: I1126 08:38:22.161236 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6bd886c577-ttt6q" podUID="d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.128:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.128:8080: connect: connection refused" Nov 26 08:38:23 crc kubenswrapper[4909]: I1126 08:38:23.217549 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-5def-account-create-n6s9q"] Nov 26 08:38:23 crc kubenswrapper[4909]: I1126 08:38:23.219882 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 08:38:23 crc kubenswrapper[4909]: I1126 08:38:23.222131 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Nov 26 08:38:23 crc kubenswrapper[4909]: I1126 08:38:23.234055 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-5def-account-create-n6s9q"]
Nov 26 08:38:23 crc kubenswrapper[4909]: I1126 08:38:23.376028 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g82bz\" (UniqueName: \"kubernetes.io/projected/580fbd26-06b0-4964-8be0-6b93f1b99690-kube-api-access-g82bz\") pod \"heat-5def-account-create-n6s9q\" (UID: \"580fbd26-06b0-4964-8be0-6b93f1b99690\") " pod="openstack/heat-5def-account-create-n6s9q"
Nov 26 08:38:23 crc kubenswrapper[4909]: I1126 08:38:23.478073 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g82bz\" (UniqueName: \"kubernetes.io/projected/580fbd26-06b0-4964-8be0-6b93f1b99690-kube-api-access-g82bz\") pod \"heat-5def-account-create-n6s9q\" (UID: \"580fbd26-06b0-4964-8be0-6b93f1b99690\") " pod="openstack/heat-5def-account-create-n6s9q"
Nov 26 08:38:23 crc kubenswrapper[4909]: I1126 08:38:23.502894 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g82bz\" (UniqueName: \"kubernetes.io/projected/580fbd26-06b0-4964-8be0-6b93f1b99690-kube-api-access-g82bz\") pod \"heat-5def-account-create-n6s9q\" (UID: \"580fbd26-06b0-4964-8be0-6b93f1b99690\") " pod="openstack/heat-5def-account-create-n6s9q"
Nov 26 08:38:23 crc kubenswrapper[4909]: I1126 08:38:23.552445 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-5def-account-create-n6s9q"
Nov 26 08:38:23 crc kubenswrapper[4909]: I1126 08:38:23.750174 4909 generic.go:334] "Generic (PLEG): container finished" podID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerID="a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b" exitCode=0
Nov 26 08:38:23 crc kubenswrapper[4909]: I1126 08:38:23.750224 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqdth" event={"ID":"163ab6c1-dae7-4980-9104-08e32bee36aa","Type":"ContainerDied","Data":"a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b"}
Nov 26 08:38:24 crc kubenswrapper[4909]: I1126 08:38:24.338059 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-5def-account-create-n6s9q"]
Nov 26 08:38:24 crc kubenswrapper[4909]: I1126 08:38:24.517209 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fc574477c-hhjdg" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.124:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8080: connect: connection refused"
Nov 26 08:38:24 crc kubenswrapper[4909]: I1126 08:38:24.517834 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fc574477c-hhjdg"
Nov 26 08:38:24 crc kubenswrapper[4909]: I1126 08:38:24.760915 4909 generic.go:334] "Generic (PLEG): container finished" podID="580fbd26-06b0-4964-8be0-6b93f1b99690" containerID="ac55f08cc9d834a79f32abe463ee43272e603431894cc6753b4e9195a5e37730" exitCode=0
Nov 26 08:38:24 crc kubenswrapper[4909]: I1126 08:38:24.761007 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5def-account-create-n6s9q" event={"ID":"580fbd26-06b0-4964-8be0-6b93f1b99690","Type":"ContainerDied","Data":"ac55f08cc9d834a79f32abe463ee43272e603431894cc6753b4e9195a5e37730"}
Nov 26 08:38:24 crc kubenswrapper[4909]: I1126 08:38:24.761037 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5def-account-create-n6s9q" event={"ID":"580fbd26-06b0-4964-8be0-6b93f1b99690","Type":"ContainerStarted","Data":"382b7ef517d07c7165d02da9af4a1611897c3402ee9572757fc5290cdac9ca76"}
Nov 26 08:38:24 crc kubenswrapper[4909]: I1126 08:38:24.763738 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqdth" event={"ID":"163ab6c1-dae7-4980-9104-08e32bee36aa","Type":"ContainerStarted","Data":"9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c"}
Nov 26 08:38:24 crc kubenswrapper[4909]: I1126 08:38:24.810266 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hqdth" podStartSLOduration=3.301244894 podStartE2EDuration="6.810249596s" podCreationTimestamp="2025-11-26 08:38:18 +0000 UTC" firstStartedPulling="2025-11-26 08:38:20.707390749 +0000 UTC m=+5872.853601915" lastFinishedPulling="2025-11-26 08:38:24.216395441 +0000 UTC m=+5876.362606617" observedRunningTime="2025-11-26 08:38:24.799241696 +0000 UTC m=+5876.945452862" watchObservedRunningTime="2025-11-26 08:38:24.810249596 +0000 UTC m=+5876.956460762"
Nov 26 08:38:26 crc kubenswrapper[4909]: I1126 08:38:26.181011 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-5def-account-create-n6s9q"
Nov 26 08:38:26 crc kubenswrapper[4909]: I1126 08:38:26.246976 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g82bz\" (UniqueName: \"kubernetes.io/projected/580fbd26-06b0-4964-8be0-6b93f1b99690-kube-api-access-g82bz\") pod \"580fbd26-06b0-4964-8be0-6b93f1b99690\" (UID: \"580fbd26-06b0-4964-8be0-6b93f1b99690\") "
Nov 26 08:38:26 crc kubenswrapper[4909]: I1126 08:38:26.253381 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580fbd26-06b0-4964-8be0-6b93f1b99690-kube-api-access-g82bz" (OuterVolumeSpecName: "kube-api-access-g82bz") pod "580fbd26-06b0-4964-8be0-6b93f1b99690" (UID: "580fbd26-06b0-4964-8be0-6b93f1b99690"). InnerVolumeSpecName "kube-api-access-g82bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:38:26 crc kubenswrapper[4909]: I1126 08:38:26.349472 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g82bz\" (UniqueName: \"kubernetes.io/projected/580fbd26-06b0-4964-8be0-6b93f1b99690-kube-api-access-g82bz\") on node \"crc\" DevicePath \"\""
Nov 26 08:38:26 crc kubenswrapper[4909]: I1126 08:38:26.790547 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5def-account-create-n6s9q" event={"ID":"580fbd26-06b0-4964-8be0-6b93f1b99690","Type":"ContainerDied","Data":"382b7ef517d07c7165d02da9af4a1611897c3402ee9572757fc5290cdac9ca76"}
Nov 26 08:38:26 crc kubenswrapper[4909]: I1126 08:38:26.790574 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-5def-account-create-n6s9q"
Nov 26 08:38:26 crc kubenswrapper[4909]: I1126 08:38:26.790614 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="382b7ef517d07c7165d02da9af4a1611897c3402ee9572757fc5290cdac9ca76"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.364198 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-nttlz"]
Nov 26 08:38:28 crc kubenswrapper[4909]: E1126 08:38:28.364897 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580fbd26-06b0-4964-8be0-6b93f1b99690" containerName="mariadb-account-create"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.364919 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="580fbd26-06b0-4964-8be0-6b93f1b99690" containerName="mariadb-account-create"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.365299 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="580fbd26-06b0-4964-8be0-6b93f1b99690" containerName="mariadb-account-create"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.366367 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-nttlz"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.372887 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-njgqp"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.373867 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.446820 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-nttlz"]
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.536865 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zz7d\" (UniqueName: \"kubernetes.io/projected/55e4e5e9-e628-4196-99c3-d882790cf706-kube-api-access-4zz7d\") pod \"heat-db-sync-nttlz\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " pod="openstack/heat-db-sync-nttlz"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.536996 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-config-data\") pod \"heat-db-sync-nttlz\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " pod="openstack/heat-db-sync-nttlz"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.537034 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-combined-ca-bundle\") pod \"heat-db-sync-nttlz\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " pod="openstack/heat-db-sync-nttlz"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.638657 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-config-data\") pod \"heat-db-sync-nttlz\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " pod="openstack/heat-db-sync-nttlz"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.638715 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-combined-ca-bundle\") pod \"heat-db-sync-nttlz\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " pod="openstack/heat-db-sync-nttlz"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.638832 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zz7d\" (UniqueName: \"kubernetes.io/projected/55e4e5e9-e628-4196-99c3-d882790cf706-kube-api-access-4zz7d\") pod \"heat-db-sync-nttlz\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " pod="openstack/heat-db-sync-nttlz"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.641514 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.655215 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-config-data\") pod \"heat-db-sync-nttlz\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " pod="openstack/heat-db-sync-nttlz"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.658157 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-combined-ca-bundle\") pod \"heat-db-sync-nttlz\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " pod="openstack/heat-db-sync-nttlz"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.661933 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zz7d\" (UniqueName: \"kubernetes.io/projected/55e4e5e9-e628-4196-99c3-d882790cf706-kube-api-access-4zz7d\") pod \"heat-db-sync-nttlz\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " pod="openstack/heat-db-sync-nttlz"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.715562 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-njgqp"
Nov 26 08:38:28 crc kubenswrapper[4909]: I1126 08:38:28.724547 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-nttlz"
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.065621 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hqdth"
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.066099 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hqdth"
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.207902 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-nttlz"]
Nov 26 08:38:29 crc kubenswrapper[4909]: W1126 08:38:29.215173 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55e4e5e9_e628_4196_99c3_d882790cf706.slice/crio-a81fed809244079adc97e60eeffe1cf3f4647782599f432b4cb6c96aafb78e25 WatchSource:0}: Error finding container a81fed809244079adc97e60eeffe1cf3f4647782599f432b4cb6c96aafb78e25: Status 404 returned error can't find the container with id a81fed809244079adc97e60eeffe1cf3f4647782599f432b4cb6c96aafb78e25
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.224654 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fc574477c-hhjdg"
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.359319 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/363f2f18-8a03-44f4-b56e-68325a86247f-horizon-secret-key\") pod \"363f2f18-8a03-44f4-b56e-68325a86247f\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") "
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.359484 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363f2f18-8a03-44f4-b56e-68325a86247f-logs\") pod \"363f2f18-8a03-44f4-b56e-68325a86247f\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") "
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.359536 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-config-data\") pod \"363f2f18-8a03-44f4-b56e-68325a86247f\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") "
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.359559 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n4l2\" (UniqueName: \"kubernetes.io/projected/363f2f18-8a03-44f4-b56e-68325a86247f-kube-api-access-8n4l2\") pod \"363f2f18-8a03-44f4-b56e-68325a86247f\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") "
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.360267 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/363f2f18-8a03-44f4-b56e-68325a86247f-logs" (OuterVolumeSpecName: "logs") pod "363f2f18-8a03-44f4-b56e-68325a86247f" (UID: "363f2f18-8a03-44f4-b56e-68325a86247f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.360687 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-scripts\") pod \"363f2f18-8a03-44f4-b56e-68325a86247f\" (UID: \"363f2f18-8a03-44f4-b56e-68325a86247f\") "
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.366262 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/363f2f18-8a03-44f4-b56e-68325a86247f-kube-api-access-8n4l2" (OuterVolumeSpecName: "kube-api-access-8n4l2") pod "363f2f18-8a03-44f4-b56e-68325a86247f" (UID: "363f2f18-8a03-44f4-b56e-68325a86247f"). InnerVolumeSpecName "kube-api-access-8n4l2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.366323 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363f2f18-8a03-44f4-b56e-68325a86247f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "363f2f18-8a03-44f4-b56e-68325a86247f" (UID: "363f2f18-8a03-44f4-b56e-68325a86247f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.392006 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-scripts" (OuterVolumeSpecName: "scripts") pod "363f2f18-8a03-44f4-b56e-68325a86247f" (UID: "363f2f18-8a03-44f4-b56e-68325a86247f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.399486 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-config-data" (OuterVolumeSpecName: "config-data") pod "363f2f18-8a03-44f4-b56e-68325a86247f" (UID: "363f2f18-8a03-44f4-b56e-68325a86247f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.462461 4909 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/363f2f18-8a03-44f4-b56e-68325a86247f-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.462491 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363f2f18-8a03-44f4-b56e-68325a86247f-logs\") on node \"crc\" DevicePath \"\""
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.462500 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.462511 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n4l2\" (UniqueName: \"kubernetes.io/projected/363f2f18-8a03-44f4-b56e-68325a86247f-kube-api-access-8n4l2\") on node \"crc\" DevicePath \"\""
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.462520 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/363f2f18-8a03-44f4-b56e-68325a86247f-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.828213 4909 generic.go:334] "Generic (PLEG): container finished" podID="363f2f18-8a03-44f4-b56e-68325a86247f" containerID="0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987" exitCode=137
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.828256 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fc574477c-hhjdg"
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.828276 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fc574477c-hhjdg" event={"ID":"363f2f18-8a03-44f4-b56e-68325a86247f","Type":"ContainerDied","Data":"0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987"}
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.828805 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fc574477c-hhjdg" event={"ID":"363f2f18-8a03-44f4-b56e-68325a86247f","Type":"ContainerDied","Data":"e104b40dd797a278e8e05bf76bcca52a7e9fcea0aa3e018d8d87ec2017527271"}
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.828841 4909 scope.go:117] "RemoveContainer" containerID="d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d"
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.831176 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nttlz" event={"ID":"55e4e5e9-e628-4196-99c3-d882790cf706","Type":"ContainerStarted","Data":"a81fed809244079adc97e60eeffe1cf3f4647782599f432b4cb6c96aafb78e25"}
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.882448 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-fc574477c-hhjdg"]
Nov 26 08:38:29 crc kubenswrapper[4909]: I1126 08:38:29.897382 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-fc574477c-hhjdg"]
Nov 26 08:38:30 crc kubenswrapper[4909]: I1126 08:38:30.043429 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-42pmh"]
Nov 26 08:38:30 crc kubenswrapper[4909]: I1126 08:38:30.053966 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-42pmh"]
Nov 26 08:38:30 crc kubenswrapper[4909]: I1126 08:38:30.068697 4909 scope.go:117] "RemoveContainer" containerID="0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987"
Nov 26 08:38:30 crc kubenswrapper[4909]: I1126 08:38:30.097481 4909 scope.go:117] "RemoveContainer" containerID="d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d"
Nov 26 08:38:30 crc kubenswrapper[4909]: E1126 08:38:30.097919 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d\": container with ID starting with d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d not found: ID does not exist" containerID="d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d"
Nov 26 08:38:30 crc kubenswrapper[4909]: I1126 08:38:30.097956 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d"} err="failed to get container status \"d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d\": rpc error: code = NotFound desc = could not find container \"d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d\": container with ID starting with d8336b307e5428cf788c4ef58b43c5538b146f724ea5f1807cf9109e32d2d92d not found: ID does not exist"
Nov 26 08:38:30 crc kubenswrapper[4909]: I1126 08:38:30.097980 4909 scope.go:117] "RemoveContainer" containerID="0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987"
Nov 26 08:38:30 crc kubenswrapper[4909]: E1126 08:38:30.098422 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987\": container with ID starting with 0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987 not found: ID does not exist" containerID="0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987"
Nov 26 08:38:30 crc kubenswrapper[4909]: I1126 08:38:30.098468 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987"} err="failed to get container status \"0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987\": rpc error: code = NotFound desc = could not find container \"0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987\": container with ID starting with 0254dedd9bf813e779ae9681cdc904c1ec18eb0fa40ce80420cb486a1a798987 not found: ID does not exist"
Nov 26 08:38:30 crc kubenswrapper[4909]: I1126 08:38:30.117760 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqdth" podUID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerName="registry-server" probeResult="failure" output=<
Nov 26 08:38:30 crc kubenswrapper[4909]: timeout: failed to connect service ":50051" within 1s
Nov 26 08:38:30 crc kubenswrapper[4909]: >
Nov 26 08:38:30 crc kubenswrapper[4909]: I1126 08:38:30.498990 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed"
Nov 26 08:38:30 crc kubenswrapper[4909]: E1126 08:38:30.499428 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:38:30 crc kubenswrapper[4909]: I1126 08:38:30.510954 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20351e45-309d-4c1d-8dda-6b18add05075" path="/var/lib/kubelet/pods/20351e45-309d-4c1d-8dda-6b18add05075/volumes"
Nov 26 08:38:30 crc kubenswrapper[4909]: I1126 08:38:30.511753 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" path="/var/lib/kubelet/pods/363f2f18-8a03-44f4-b56e-68325a86247f/volumes"
Nov 26 08:38:34 crc kubenswrapper[4909]: I1126 08:38:34.124585 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:35 crc kubenswrapper[4909]: I1126 08:38:35.974539 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6bd886c577-ttt6q"
Nov 26 08:38:36 crc kubenswrapper[4909]: I1126 08:38:36.059148 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-844fcddd89-784bm"]
Nov 26 08:38:36 crc kubenswrapper[4909]: I1126 08:38:36.059511 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-844fcddd89-784bm" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon-log" containerID="cri-o://5a2c7e376dd621080466e2cc928f88c2991367a397e6aafa1a581c8ff05ab4cb" gracePeriod=30
Nov 26 08:38:36 crc kubenswrapper[4909]: I1126 08:38:36.059685 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-844fcddd89-784bm" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon" containerID="cri-o://5b8a522d3b3f036f06658f214ad156b9b7341fa4b260dc7d3f29b305baa5f098" gracePeriod=30
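The paired "RemoveContainer" / "ContainerStatus from runtime service failed ... NotFound" records above are benign: container deletion is idempotent, and the second lookup of an already-removed container simply returns NotFound. The "Killing container with a grace period ... gracePeriod=30" lines start the normal termination path: the runtime delivers SIGTERM and escalates to SIGKILL only if the container outlives its grace period. A sketch that pairs Killing records with the matching "container finished" records to see whether each container beat its grace period; regexes are assumptions modeled on the lines in this log:

    import sys, re
    from datetime import datetime

    KILL = re.compile(r'"Killing container with a grace period".*containerID="cri-o://(\w+)"')
    DIED = re.compile(r'"Generic \(PLEG\): container finished".*containerID="(\w+)" exitCode=(\d+)')

    def stamp(line):
        return datetime.strptime(line[:15], "%b %d %H:%M:%S")  # journald prefix

    kills = {}
    for line in sys.stdin:
        if (m := KILL.search(line)):
            kills[m.group(1)] = stamp(line)
        elif (m := DIED.search(line)) and m.group(1) in kills:
            took = (stamp(line) - kills[m.group(1)]).total_seconds()
            verdict = "within grace" if took <= 30 else "likely SIGKILLed"
            print(f"{m.group(1)[:12]} exit={m.group(2)} after {took:.0f}s ({verdict})")

For horizon-844fcddd89-784bm the horizon container exits 0 about four seconds after the kill is issued (08:38:36 to 08:38:40 below), so SIGTERM sufficed.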
podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon" containerID="cri-o://5b8a522d3b3f036f06658f214ad156b9b7341fa4b260dc7d3f29b305baa5f098" gracePeriod=30 Nov 26 08:38:36 crc kubenswrapper[4909]: I1126 08:38:36.950985 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nttlz" event={"ID":"55e4e5e9-e628-4196-99c3-d882790cf706","Type":"ContainerStarted","Data":"56a3f68b2b0d31727edf8235e6a3e0db4c64981bf1b59864317b0df6ac6df6c3"} Nov 26 08:38:36 crc kubenswrapper[4909]: I1126 08:38:36.966309 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-nttlz" podStartSLOduration=1.828193739 podStartE2EDuration="8.966293883s" podCreationTimestamp="2025-11-26 08:38:28 +0000 UTC" firstStartedPulling="2025-11-26 08:38:29.217763952 +0000 UTC m=+5881.363975118" lastFinishedPulling="2025-11-26 08:38:36.355864096 +0000 UTC m=+5888.502075262" observedRunningTime="2025-11-26 08:38:36.963941839 +0000 UTC m=+5889.110152995" watchObservedRunningTime="2025-11-26 08:38:36.966293883 +0000 UTC m=+5889.112505049" Nov 26 08:38:38 crc kubenswrapper[4909]: I1126 08:38:38.973930 4909 generic.go:334] "Generic (PLEG): container finished" podID="55e4e5e9-e628-4196-99c3-d882790cf706" containerID="56a3f68b2b0d31727edf8235e6a3e0db4c64981bf1b59864317b0df6ac6df6c3" exitCode=0 Nov 26 08:38:38 crc kubenswrapper[4909]: I1126 08:38:38.974050 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nttlz" event={"ID":"55e4e5e9-e628-4196-99c3-d882790cf706","Type":"ContainerDied","Data":"56a3f68b2b0d31727edf8235e6a3e0db4c64981bf1b59864317b0df6ac6df6c3"} Nov 26 08:38:39 crc kubenswrapper[4909]: I1126 08:38:39.675300 4909 scope.go:117] "RemoveContainer" containerID="3828f0117bdac39cc672babf5bfaa4b6ec002526082fec083694ec9dd90c28ba" Nov 26 08:38:39 crc kubenswrapper[4909]: I1126 08:38:39.705718 4909 scope.go:117] "RemoveContainer" containerID="8b4dd1d812494a82df935bf46e840b046bf1432bee728c6b29767c769a344f59" Nov 26 08:38:39 crc kubenswrapper[4909]: I1126 08:38:39.801865 4909 scope.go:117] "RemoveContainer" containerID="12bfc7c0a0669733352f097155500512db2747ed77dec1c4ad9d293a65242816" Nov 26 08:38:39 crc kubenswrapper[4909]: I1126 08:38:39.836819 4909 scope.go:117] "RemoveContainer" containerID="6e6d19ceba050ea62d01d987a8b92bedd8dd8ab21c1fa542e5bb0fc0b14fb9fc" Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.007791 4909 generic.go:334] "Generic (PLEG): container finished" podID="f60c3670-b331-4694-a090-36bea1a48307" containerID="5b8a522d3b3f036f06658f214ad156b9b7341fa4b260dc7d3f29b305baa5f098" exitCode=0 Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.007888 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844fcddd89-784bm" event={"ID":"f60c3670-b331-4694-a090-36bea1a48307","Type":"ContainerDied","Data":"5b8a522d3b3f036f06658f214ad156b9b7341fa4b260dc7d3f29b305baa5f098"} Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.033365 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d8fd-account-create-2rknr"] Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.044083 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d8fd-account-create-2rknr"] Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.116802 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqdth" podUID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerName="registry-server" probeResult="failure" output=< Nov 
26 08:38:40 crc kubenswrapper[4909]: timeout: failed to connect service ":50051" within 1s Nov 26 08:38:40 crc kubenswrapper[4909]: > Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.354323 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-nttlz" Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.515075 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27055241-fb7f-439f-a0ce-54243ce3d2eb" path="/var/lib/kubelet/pods/27055241-fb7f-439f-a0ce-54243ce3d2eb/volumes" Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.524511 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-combined-ca-bundle\") pod \"55e4e5e9-e628-4196-99c3-d882790cf706\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.524577 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-config-data\") pod \"55e4e5e9-e628-4196-99c3-d882790cf706\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.524696 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zz7d\" (UniqueName: \"kubernetes.io/projected/55e4e5e9-e628-4196-99c3-d882790cf706-kube-api-access-4zz7d\") pod \"55e4e5e9-e628-4196-99c3-d882790cf706\" (UID: \"55e4e5e9-e628-4196-99c3-d882790cf706\") " Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.531765 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e4e5e9-e628-4196-99c3-d882790cf706-kube-api-access-4zz7d" (OuterVolumeSpecName: "kube-api-access-4zz7d") pod "55e4e5e9-e628-4196-99c3-d882790cf706" (UID: "55e4e5e9-e628-4196-99c3-d882790cf706"). InnerVolumeSpecName "kube-api-access-4zz7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.557139 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55e4e5e9-e628-4196-99c3-d882790cf706" (UID: "55e4e5e9-e628-4196-99c3-d882790cf706"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.602047 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-config-data" (OuterVolumeSpecName: "config-data") pod "55e4e5e9-e628-4196-99c3-d882790cf706" (UID: "55e4e5e9-e628-4196-99c3-d882790cf706"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.627101 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.627171 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e4e5e9-e628-4196-99c3-d882790cf706-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:40 crc kubenswrapper[4909]: I1126 08:38:40.627181 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zz7d\" (UniqueName: \"kubernetes.io/projected/55e4e5e9-e628-4196-99c3-d882790cf706-kube-api-access-4zz7d\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:41 crc kubenswrapper[4909]: I1126 08:38:41.025283 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nttlz" event={"ID":"55e4e5e9-e628-4196-99c3-d882790cf706","Type":"ContainerDied","Data":"a81fed809244079adc97e60eeffe1cf3f4647782599f432b4cb6c96aafb78e25"} Nov 26 08:38:41 crc kubenswrapper[4909]: I1126 08:38:41.025634 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a81fed809244079adc97e60eeffe1cf3f4647782599f432b4cb6c96aafb78e25" Nov 26 08:38:41 crc kubenswrapper[4909]: I1126 08:38:41.025367 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-nttlz" Nov 26 08:38:41 crc kubenswrapper[4909]: I1126 08:38:41.499484 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:38:41 crc kubenswrapper[4909]: E1126 08:38:41.499932 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.029172 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7f8d6cd4db-jq8c7"] Nov 26 08:38:42 crc kubenswrapper[4909]: E1126 08:38:42.029885 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" containerName="horizon-log" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.029899 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" containerName="horizon-log" Nov 26 08:38:42 crc kubenswrapper[4909]: E1126 08:38:42.029920 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e4e5e9-e628-4196-99c3-d882790cf706" containerName="heat-db-sync" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.029926 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e4e5e9-e628-4196-99c3-d882790cf706" containerName="heat-db-sync" Nov 26 08:38:42 crc kubenswrapper[4909]: E1126 08:38:42.029944 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" containerName="horizon" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.029950 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" 
containerName="horizon" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.030141 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e4e5e9-e628-4196-99c3-d882790cf706" containerName="heat-db-sync" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.030167 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" containerName="horizon-log" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.030188 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="363f2f18-8a03-44f4-b56e-68325a86247f" containerName="horizon" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.030998 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.033071 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-njgqp" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.033239 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.033376 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.051684 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7f8d6cd4db-jq8c7"] Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.053696 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d230c25a-c148-4549-9d86-60b46e6e5145-combined-ca-bundle\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.053804 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d230c25a-c148-4549-9d86-60b46e6e5145-config-data\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.053868 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d230c25a-c148-4549-9d86-60b46e6e5145-config-data-custom\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.053902 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzf4c\" (UniqueName: \"kubernetes.io/projected/d230c25a-c148-4549-9d86-60b46e6e5145-kube-api-access-fzf4c\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.156063 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d230c25a-c148-4549-9d86-60b46e6e5145-combined-ca-bundle\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 
08:38:42.156170 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d230c25a-c148-4549-9d86-60b46e6e5145-config-data\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.156228 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d230c25a-c148-4549-9d86-60b46e6e5145-config-data-custom\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.156262 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzf4c\" (UniqueName: \"kubernetes.io/projected/d230c25a-c148-4549-9d86-60b46e6e5145-kube-api-access-fzf4c\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.161238 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d230c25a-c148-4549-9d86-60b46e6e5145-combined-ca-bundle\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.164482 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d230c25a-c148-4549-9d86-60b46e6e5145-config-data-custom\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.171392 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d230c25a-c148-4549-9d86-60b46e6e5145-config-data\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.200092 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzf4c\" (UniqueName: \"kubernetes.io/projected/d230c25a-c148-4549-9d86-60b46e6e5145-kube-api-access-fzf4c\") pod \"heat-engine-7f8d6cd4db-jq8c7\" (UID: \"d230c25a-c148-4549-9d86-60b46e6e5145\") " pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.283880 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-ff4df84b7-q74lm"] Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.285111 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.288268 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.304800 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-ff4df84b7-q74lm"] Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.358045 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.366349 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-combined-ca-bundle\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.366444 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-config-data\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.366557 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g6c5\" (UniqueName: \"kubernetes.io/projected/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-kube-api-access-5g6c5\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.366664 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-config-data-custom\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.371401 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5bc58458dc-fkz9r"] Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.372852 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.375882 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.419653 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5bc58458dc-fkz9r"] Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.468007 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g6c5\" (UniqueName: \"kubernetes.io/projected/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-kube-api-access-5g6c5\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.468068 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-config-data-custom\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.468105 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/330f2a23-1b8e-4881-a458-e9d463c4383e-config-data-custom\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.468134 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330f2a23-1b8e-4881-a458-e9d463c4383e-combined-ca-bundle\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.468160 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5nzr\" (UniqueName: \"kubernetes.io/projected/330f2a23-1b8e-4881-a458-e9d463c4383e-kube-api-access-w5nzr\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.468222 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-combined-ca-bundle\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.468264 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-config-data\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.468299 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330f2a23-1b8e-4881-a458-e9d463c4383e-config-data\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " 
pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.476532 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-combined-ca-bundle\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.476784 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-config-data\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.477444 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-config-data-custom\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.495061 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g6c5\" (UniqueName: \"kubernetes.io/projected/f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4-kube-api-access-5g6c5\") pod \"heat-cfnapi-ff4df84b7-q74lm\" (UID: \"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4\") " pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.572918 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/330f2a23-1b8e-4881-a458-e9d463c4383e-config-data-custom\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.572974 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330f2a23-1b8e-4881-a458-e9d463c4383e-combined-ca-bundle\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.573005 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5nzr\" (UniqueName: \"kubernetes.io/projected/330f2a23-1b8e-4881-a458-e9d463c4383e-kube-api-access-w5nzr\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.573129 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330f2a23-1b8e-4881-a458-e9d463c4383e-config-data\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.579324 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/330f2a23-1b8e-4881-a458-e9d463c4383e-config-data-custom\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 
08:38:42.580871 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330f2a23-1b8e-4881-a458-e9d463c4383e-config-data\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.582127 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330f2a23-1b8e-4881-a458-e9d463c4383e-combined-ca-bundle\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.602679 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5nzr\" (UniqueName: \"kubernetes.io/projected/330f2a23-1b8e-4881-a458-e9d463c4383e-kube-api-access-w5nzr\") pod \"heat-api-5bc58458dc-fkz9r\" (UID: \"330f2a23-1b8e-4881-a458-e9d463c4383e\") " pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.625137 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.839092 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:42 crc kubenswrapper[4909]: I1126 08:38:42.954622 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7f8d6cd4db-jq8c7"] Nov 26 08:38:43 crc kubenswrapper[4909]: I1126 08:38:43.062772 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f8d6cd4db-jq8c7" event={"ID":"d230c25a-c148-4549-9d86-60b46e6e5145","Type":"ContainerStarted","Data":"9c3336c0c4459e6c540c45e8d259eebf288f66524376d3ab84fa5541bdff1ca5"} Nov 26 08:38:43 crc kubenswrapper[4909]: I1126 08:38:43.213106 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-ff4df84b7-q74lm"] Nov 26 08:38:43 crc kubenswrapper[4909]: I1126 08:38:43.352944 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5bc58458dc-fkz9r"] Nov 26 08:38:44 crc kubenswrapper[4909]: I1126 08:38:44.078584 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f8d6cd4db-jq8c7" event={"ID":"d230c25a-c148-4549-9d86-60b46e6e5145","Type":"ContainerStarted","Data":"357d5b538b214510994ec6bb3660718095137aa5484217771644641d5a69ff64"} Nov 26 08:38:44 crc kubenswrapper[4909]: I1126 08:38:44.078864 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:38:44 crc kubenswrapper[4909]: I1126 08:38:44.089952 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5bc58458dc-fkz9r" event={"ID":"330f2a23-1b8e-4881-a458-e9d463c4383e","Type":"ContainerStarted","Data":"906d9cdbaa9692d05739fdfb031bc80577e047bd040f7eb143639caee30e929f"} Nov 26 08:38:44 crc kubenswrapper[4909]: I1126 08:38:44.091798 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ff4df84b7-q74lm" event={"ID":"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4","Type":"ContainerStarted","Data":"e360c2a82a8177a99231fadf28553d168d04f0ed4c96b04001016c0090a61fee"} Nov 26 08:38:44 crc kubenswrapper[4909]: I1126 08:38:44.096426 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7f8d6cd4db-jq8c7" 
podStartSLOduration=2.096415229 podStartE2EDuration="2.096415229s" podCreationTimestamp="2025-11-26 08:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:38:44.094435835 +0000 UTC m=+5896.240647001" watchObservedRunningTime="2025-11-26 08:38:44.096415229 +0000 UTC m=+5896.242626395" Nov 26 08:38:45 crc kubenswrapper[4909]: I1126 08:38:45.047580 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-844fcddd89-784bm" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.125:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.125:8080: connect: connection refused" Nov 26 08:38:46 crc kubenswrapper[4909]: I1126 08:38:46.111743 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5bc58458dc-fkz9r" event={"ID":"330f2a23-1b8e-4881-a458-e9d463c4383e","Type":"ContainerStarted","Data":"11f0fe039a8e9bda992c324f0a071830fcdaa7d44b917b08d303656ffad3ff38"} Nov 26 08:38:46 crc kubenswrapper[4909]: I1126 08:38:46.112106 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:46 crc kubenswrapper[4909]: I1126 08:38:46.113177 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ff4df84b7-q74lm" event={"ID":"f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4","Type":"ContainerStarted","Data":"302eeed8fb702b9856863db3ac73b1e3d04f2f9a65761052116fcbac281d88a8"} Nov 26 08:38:46 crc kubenswrapper[4909]: I1126 08:38:46.113293 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:46 crc kubenswrapper[4909]: I1126 08:38:46.139315 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5bc58458dc-fkz9r" podStartSLOduration=2.120728004 podStartE2EDuration="4.139293929s" podCreationTimestamp="2025-11-26 08:38:42 +0000 UTC" firstStartedPulling="2025-11-26 08:38:43.370486818 +0000 UTC m=+5895.516697994" lastFinishedPulling="2025-11-26 08:38:45.389052753 +0000 UTC m=+5897.535263919" observedRunningTime="2025-11-26 08:38:46.136872602 +0000 UTC m=+5898.283083788" watchObservedRunningTime="2025-11-26 08:38:46.139293929 +0000 UTC m=+5898.285505095" Nov 26 08:38:46 crc kubenswrapper[4909]: I1126 08:38:46.165908 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-ff4df84b7-q74lm" podStartSLOduration=2.003277436 podStartE2EDuration="4.165887965s" podCreationTimestamp="2025-11-26 08:38:42 +0000 UTC" firstStartedPulling="2025-11-26 08:38:43.223104433 +0000 UTC m=+5895.369315599" lastFinishedPulling="2025-11-26 08:38:45.385714962 +0000 UTC m=+5897.531926128" observedRunningTime="2025-11-26 08:38:46.161731481 +0000 UTC m=+5898.307942647" watchObservedRunningTime="2025-11-26 08:38:46.165887965 +0000 UTC m=+5898.312099131" Nov 26 08:38:47 crc kubenswrapper[4909]: I1126 08:38:47.046650 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-jwjcz"] Nov 26 08:38:47 crc kubenswrapper[4909]: I1126 08:38:47.053665 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jwjcz"] Nov 26 08:38:48 crc kubenswrapper[4909]: I1126 08:38:48.510697 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72338267-9d00-4d71-b0fa-f1e2e5c42397" 
path="/var/lib/kubelet/pods/72338267-9d00-4d71-b0fa-f1e2e5c42397/volumes" Nov 26 08:38:49 crc kubenswrapper[4909]: I1126 08:38:49.118976 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:49 crc kubenswrapper[4909]: I1126 08:38:49.181447 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:49 crc kubenswrapper[4909]: I1126 08:38:49.909749 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqdth"] Nov 26 08:38:50 crc kubenswrapper[4909]: I1126 08:38:50.159007 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hqdth" podUID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerName="registry-server" containerID="cri-o://9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c" gracePeriod=2 Nov 26 08:38:50 crc kubenswrapper[4909]: I1126 08:38:50.751571 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:50 crc kubenswrapper[4909]: I1126 08:38:50.822266 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-utilities\") pod \"163ab6c1-dae7-4980-9104-08e32bee36aa\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " Nov 26 08:38:50 crc kubenswrapper[4909]: I1126 08:38:50.822627 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-catalog-content\") pod \"163ab6c1-dae7-4980-9104-08e32bee36aa\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " Nov 26 08:38:50 crc kubenswrapper[4909]: I1126 08:38:50.822750 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xck5\" (UniqueName: \"kubernetes.io/projected/163ab6c1-dae7-4980-9104-08e32bee36aa-kube-api-access-8xck5\") pod \"163ab6c1-dae7-4980-9104-08e32bee36aa\" (UID: \"163ab6c1-dae7-4980-9104-08e32bee36aa\") " Nov 26 08:38:50 crc kubenswrapper[4909]: I1126 08:38:50.823223 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-utilities" (OuterVolumeSpecName: "utilities") pod "163ab6c1-dae7-4980-9104-08e32bee36aa" (UID: "163ab6c1-dae7-4980-9104-08e32bee36aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:38:50 crc kubenswrapper[4909]: I1126 08:38:50.823518 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:50 crc kubenswrapper[4909]: I1126 08:38:50.830501 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163ab6c1-dae7-4980-9104-08e32bee36aa-kube-api-access-8xck5" (OuterVolumeSpecName: "kube-api-access-8xck5") pod "163ab6c1-dae7-4980-9104-08e32bee36aa" (UID: "163ab6c1-dae7-4980-9104-08e32bee36aa"). InnerVolumeSpecName "kube-api-access-8xck5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:38:50 crc kubenswrapper[4909]: I1126 08:38:50.912069 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "163ab6c1-dae7-4980-9104-08e32bee36aa" (UID: "163ab6c1-dae7-4980-9104-08e32bee36aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:38:50 crc kubenswrapper[4909]: I1126 08:38:50.926631 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163ab6c1-dae7-4980-9104-08e32bee36aa-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:50 crc kubenswrapper[4909]: I1126 08:38:50.926669 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xck5\" (UniqueName: \"kubernetes.io/projected/163ab6c1-dae7-4980-9104-08e32bee36aa-kube-api-access-8xck5\") on node \"crc\" DevicePath \"\"" Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.172134 4909 generic.go:334] "Generic (PLEG): container finished" podID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerID="9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c" exitCode=0 Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.172184 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqdth" event={"ID":"163ab6c1-dae7-4980-9104-08e32bee36aa","Type":"ContainerDied","Data":"9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c"} Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.172218 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqdth" event={"ID":"163ab6c1-dae7-4980-9104-08e32bee36aa","Type":"ContainerDied","Data":"e36ce71031fb195016fd3ba33b33be7355ab0b6528afef5feb4b663ee9ad991d"} Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.172234 4909 scope.go:117] "RemoveContainer" containerID="9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c" Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.172241 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqdth" Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.205323 4909 scope.go:117] "RemoveContainer" containerID="a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b" Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.228382 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqdth"] Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.237125 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hqdth"] Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.247471 4909 scope.go:117] "RemoveContainer" containerID="ca0e33201d3f749ea8c6f4eec6726481f164cc4a66c0a032fdcd7ea04e6f844d" Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.277372 4909 scope.go:117] "RemoveContainer" containerID="9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c" Nov 26 08:38:51 crc kubenswrapper[4909]: E1126 08:38:51.277938 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c\": container with ID starting with 9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c not found: ID does not exist" containerID="9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c" Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.277983 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c"} err="failed to get container status \"9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c\": rpc error: code = NotFound desc = could not find container \"9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c\": container with ID starting with 9c064b5a59cf208c5907674a6c3c471e09c44fa7df65b02358378d5d0567883c not found: ID does not exist" Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.278010 4909 scope.go:117] "RemoveContainer" containerID="a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b" Nov 26 08:38:51 crc kubenswrapper[4909]: E1126 08:38:51.279293 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b\": container with ID starting with a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b not found: ID does not exist" containerID="a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b" Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.279329 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b"} err="failed to get container status \"a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b\": rpc error: code = NotFound desc = could not find container \"a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b\": container with ID starting with a9f4fa30f18ba22662a254bd8cb0ff2f28718fde9acf5bcef07f319772a6422b not found: ID does not exist" Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.279349 4909 scope.go:117] "RemoveContainer" containerID="ca0e33201d3f749ea8c6f4eec6726481f164cc4a66c0a032fdcd7ea04e6f844d" Nov 26 08:38:51 crc kubenswrapper[4909]: E1126 08:38:51.279763 4909 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ca0e33201d3f749ea8c6f4eec6726481f164cc4a66c0a032fdcd7ea04e6f844d\": container with ID starting with ca0e33201d3f749ea8c6f4eec6726481f164cc4a66c0a032fdcd7ea04e6f844d not found: ID does not exist" containerID="ca0e33201d3f749ea8c6f4eec6726481f164cc4a66c0a032fdcd7ea04e6f844d" Nov 26 08:38:51 crc kubenswrapper[4909]: I1126 08:38:51.279811 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0e33201d3f749ea8c6f4eec6726481f164cc4a66c0a032fdcd7ea04e6f844d"} err="failed to get container status \"ca0e33201d3f749ea8c6f4eec6726481f164cc4a66c0a032fdcd7ea04e6f844d\": rpc error: code = NotFound desc = could not find container \"ca0e33201d3f749ea8c6f4eec6726481f164cc4a66c0a032fdcd7ea04e6f844d\": container with ID starting with ca0e33201d3f749ea8c6f4eec6726481f164cc4a66c0a032fdcd7ea04e6f844d not found: ID does not exist" Nov 26 08:38:52 crc kubenswrapper[4909]: I1126 08:38:52.517940 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163ab6c1-dae7-4980-9104-08e32bee36aa" path="/var/lib/kubelet/pods/163ab6c1-dae7-4980-9104-08e32bee36aa/volumes" Nov 26 08:38:52 crc kubenswrapper[4909]: I1126 08:38:52.929474 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r82ld"] Nov 26 08:38:52 crc kubenswrapper[4909]: E1126 08:38:52.930515 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerName="extract-content" Nov 26 08:38:52 crc kubenswrapper[4909]: I1126 08:38:52.930550 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerName="extract-content" Nov 26 08:38:52 crc kubenswrapper[4909]: E1126 08:38:52.930573 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerName="extract-utilities" Nov 26 08:38:52 crc kubenswrapper[4909]: I1126 08:38:52.930621 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerName="extract-utilities" Nov 26 08:38:52 crc kubenswrapper[4909]: E1126 08:38:52.930690 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerName="registry-server" Nov 26 08:38:52 crc kubenswrapper[4909]: I1126 08:38:52.930705 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerName="registry-server" Nov 26 08:38:52 crc kubenswrapper[4909]: I1126 08:38:52.931176 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="163ab6c1-dae7-4980-9104-08e32bee36aa" containerName="registry-server" Nov 26 08:38:52 crc kubenswrapper[4909]: I1126 08:38:52.941965 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:38:52 crc kubenswrapper[4909]: I1126 08:38:52.976508 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4mxr\" (UniqueName: \"kubernetes.io/projected/662d4438-0b22-4382-ba6e-fc486bbf63f3-kube-api-access-t4mxr\") pod \"certified-operators-r82ld\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:38:52 crc kubenswrapper[4909]: I1126 08:38:52.976553 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-utilities\") pod \"certified-operators-r82ld\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:38:52 crc kubenswrapper[4909]: I1126 08:38:52.976664 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-catalog-content\") pod \"certified-operators-r82ld\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:38:52 crc kubenswrapper[4909]: I1126 08:38:52.986120 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r82ld"] Nov 26 08:38:53 crc kubenswrapper[4909]: I1126 08:38:53.078562 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-catalog-content\") pod \"certified-operators-r82ld\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:38:53 crc kubenswrapper[4909]: I1126 08:38:53.078754 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4mxr\" (UniqueName: \"kubernetes.io/projected/662d4438-0b22-4382-ba6e-fc486bbf63f3-kube-api-access-t4mxr\") pod \"certified-operators-r82ld\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:38:53 crc kubenswrapper[4909]: I1126 08:38:53.078773 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-utilities\") pod \"certified-operators-r82ld\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:38:53 crc kubenswrapper[4909]: I1126 08:38:53.079074 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-catalog-content\") pod \"certified-operators-r82ld\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:38:53 crc kubenswrapper[4909]: I1126 08:38:53.079158 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-utilities\") pod \"certified-operators-r82ld\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:38:53 crc kubenswrapper[4909]: I1126 08:38:53.100337 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t4mxr\" (UniqueName: \"kubernetes.io/projected/662d4438-0b22-4382-ba6e-fc486bbf63f3-kube-api-access-t4mxr\") pod \"certified-operators-r82ld\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:38:53 crc kubenswrapper[4909]: I1126 08:38:53.277569 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:38:53 crc kubenswrapper[4909]: I1126 08:38:53.865351 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r82ld"] Nov 26 08:38:54 crc kubenswrapper[4909]: I1126 08:38:54.173395 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-ff4df84b7-q74lm" Nov 26 08:38:54 crc kubenswrapper[4909]: I1126 08:38:54.211031 4909 generic.go:334] "Generic (PLEG): container finished" podID="662d4438-0b22-4382-ba6e-fc486bbf63f3" containerID="6a679da4154299f279eaa3c32b4bd983d6f2b415334f4180184ccc78c11b61d9" exitCode=0 Nov 26 08:38:54 crc kubenswrapper[4909]: I1126 08:38:54.211079 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r82ld" event={"ID":"662d4438-0b22-4382-ba6e-fc486bbf63f3","Type":"ContainerDied","Data":"6a679da4154299f279eaa3c32b4bd983d6f2b415334f4180184ccc78c11b61d9"} Nov 26 08:38:54 crc kubenswrapper[4909]: I1126 08:38:54.211107 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r82ld" event={"ID":"662d4438-0b22-4382-ba6e-fc486bbf63f3","Type":"ContainerStarted","Data":"86c4a106a101293df2eb7a1e3c2bb58a7a7860677a15953d234c6ddb30935eef"} Nov 26 08:38:54 crc kubenswrapper[4909]: I1126 08:38:54.341283 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5bc58458dc-fkz9r" Nov 26 08:38:54 crc kubenswrapper[4909]: I1126 08:38:54.507736 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:38:54 crc kubenswrapper[4909]: E1126 08:38:54.508287 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:38:55 crc kubenswrapper[4909]: I1126 08:38:55.047749 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-844fcddd89-784bm" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.125:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.125:8080: connect: connection refused" Nov 26 08:38:55 crc kubenswrapper[4909]: I1126 08:38:55.222820 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r82ld" event={"ID":"662d4438-0b22-4382-ba6e-fc486bbf63f3","Type":"ContainerStarted","Data":"28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442"} Nov 26 08:38:56 crc kubenswrapper[4909]: I1126 08:38:56.237411 4909 generic.go:334] "Generic (PLEG): container finished" podID="662d4438-0b22-4382-ba6e-fc486bbf63f3" 
containerID="28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442" exitCode=0 Nov 26 08:38:56 crc kubenswrapper[4909]: I1126 08:38:56.237497 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r82ld" event={"ID":"662d4438-0b22-4382-ba6e-fc486bbf63f3","Type":"ContainerDied","Data":"28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442"} Nov 26 08:38:57 crc kubenswrapper[4909]: I1126 08:38:57.252193 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r82ld" event={"ID":"662d4438-0b22-4382-ba6e-fc486bbf63f3","Type":"ContainerStarted","Data":"e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80"} Nov 26 08:38:57 crc kubenswrapper[4909]: I1126 08:38:57.297575 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r82ld" podStartSLOduration=2.853372433 podStartE2EDuration="5.297546201s" podCreationTimestamp="2025-11-26 08:38:52 +0000 UTC" firstStartedPulling="2025-11-26 08:38:54.213645396 +0000 UTC m=+5906.359856602" lastFinishedPulling="2025-11-26 08:38:56.657819184 +0000 UTC m=+5908.804030370" observedRunningTime="2025-11-26 08:38:57.284367012 +0000 UTC m=+5909.430578218" watchObservedRunningTime="2025-11-26 08:38:57.297546201 +0000 UTC m=+5909.443757407" Nov 26 08:39:02 crc kubenswrapper[4909]: I1126 08:39:02.412221 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7f8d6cd4db-jq8c7" Nov 26 08:39:03 crc kubenswrapper[4909]: I1126 08:39:03.278351 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:39:03 crc kubenswrapper[4909]: I1126 08:39:03.278669 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:39:03 crc kubenswrapper[4909]: I1126 08:39:03.334154 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:39:03 crc kubenswrapper[4909]: I1126 08:39:03.399211 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:39:03 crc kubenswrapper[4909]: I1126 08:39:03.572962 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r82ld"] Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.047931 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-844fcddd89-784bm" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.125:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.125:8080: connect: connection refused" Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.048071 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.330895 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r82ld" podUID="662d4438-0b22-4382-ba6e-fc486bbf63f3" containerName="registry-server" containerID="cri-o://e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80" gracePeriod=2 Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.854115 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.891076 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-catalog-content\") pod \"662d4438-0b22-4382-ba6e-fc486bbf63f3\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.891260 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-utilities\") pod \"662d4438-0b22-4382-ba6e-fc486bbf63f3\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.891448 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4mxr\" (UniqueName: \"kubernetes.io/projected/662d4438-0b22-4382-ba6e-fc486bbf63f3-kube-api-access-t4mxr\") pod \"662d4438-0b22-4382-ba6e-fc486bbf63f3\" (UID: \"662d4438-0b22-4382-ba6e-fc486bbf63f3\") " Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.892152 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-utilities" (OuterVolumeSpecName: "utilities") pod "662d4438-0b22-4382-ba6e-fc486bbf63f3" (UID: "662d4438-0b22-4382-ba6e-fc486bbf63f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.900170 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662d4438-0b22-4382-ba6e-fc486bbf63f3-kube-api-access-t4mxr" (OuterVolumeSpecName: "kube-api-access-t4mxr") pod "662d4438-0b22-4382-ba6e-fc486bbf63f3" (UID: "662d4438-0b22-4382-ba6e-fc486bbf63f3"). InnerVolumeSpecName "kube-api-access-t4mxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.932526 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "662d4438-0b22-4382-ba6e-fc486bbf63f3" (UID: "662d4438-0b22-4382-ba6e-fc486bbf63f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.993726 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4mxr\" (UniqueName: \"kubernetes.io/projected/662d4438-0b22-4382-ba6e-fc486bbf63f3-kube-api-access-t4mxr\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.993762 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:05 crc kubenswrapper[4909]: I1126 08:39:05.993778 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662d4438-0b22-4382-ba6e-fc486bbf63f3-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.343526 4909 generic.go:334] "Generic (PLEG): container finished" podID="662d4438-0b22-4382-ba6e-fc486bbf63f3" containerID="e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80" exitCode=0 Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.343577 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r82ld" event={"ID":"662d4438-0b22-4382-ba6e-fc486bbf63f3","Type":"ContainerDied","Data":"e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80"} Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.343653 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r82ld" event={"ID":"662d4438-0b22-4382-ba6e-fc486bbf63f3","Type":"ContainerDied","Data":"86c4a106a101293df2eb7a1e3c2bb58a7a7860677a15953d234c6ddb30935eef"} Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.343656 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r82ld" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.343682 4909 scope.go:117] "RemoveContainer" containerID="e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.354053 4909 generic.go:334] "Generic (PLEG): container finished" podID="f60c3670-b331-4694-a090-36bea1a48307" containerID="5a2c7e376dd621080466e2cc928f88c2991367a397e6aafa1a581c8ff05ab4cb" exitCode=137 Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.354095 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844fcddd89-784bm" event={"ID":"f60c3670-b331-4694-a090-36bea1a48307","Type":"ContainerDied","Data":"5a2c7e376dd621080466e2cc928f88c2991367a397e6aafa1a581c8ff05ab4cb"} Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.406667 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r82ld"] Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.416903 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r82ld"] Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.420260 4909 scope.go:117] "RemoveContainer" containerID="28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.462438 4909 scope.go:117] "RemoveContainer" containerID="6a679da4154299f279eaa3c32b4bd983d6f2b415334f4180184ccc78c11b61d9" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.516308 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662d4438-0b22-4382-ba6e-fc486bbf63f3" path="/var/lib/kubelet/pods/662d4438-0b22-4382-ba6e-fc486bbf63f3/volumes" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.566332 4909 scope.go:117] "RemoveContainer" containerID="e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80" Nov 26 08:39:06 crc kubenswrapper[4909]: E1126 08:39:06.566869 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80\": container with ID starting with e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80 not found: ID does not exist" containerID="e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.566905 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80"} err="failed to get container status \"e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80\": rpc error: code = NotFound desc = could not find container \"e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80\": container with ID starting with e4d4d14755257afd05a83bdea01a4ab70fd964e0f91a7a7ac5c75ac498484a80 not found: ID does not exist" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.566933 4909 scope.go:117] "RemoveContainer" containerID="28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442" Nov 26 08:39:06 crc kubenswrapper[4909]: E1126 08:39:06.567283 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442\": container with ID starting with 28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442 not 
found: ID does not exist" containerID="28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.567312 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442"} err="failed to get container status \"28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442\": rpc error: code = NotFound desc = could not find container \"28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442\": container with ID starting with 28ab8d7509eaf1ee83edb5af84db2a542bfada19ae0c6697069c1616a6bb3442 not found: ID does not exist" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.567328 4909 scope.go:117] "RemoveContainer" containerID="6a679da4154299f279eaa3c32b4bd983d6f2b415334f4180184ccc78c11b61d9" Nov 26 08:39:06 crc kubenswrapper[4909]: E1126 08:39:06.567653 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a679da4154299f279eaa3c32b4bd983d6f2b415334f4180184ccc78c11b61d9\": container with ID starting with 6a679da4154299f279eaa3c32b4bd983d6f2b415334f4180184ccc78c11b61d9 not found: ID does not exist" containerID="6a679da4154299f279eaa3c32b4bd983d6f2b415334f4180184ccc78c11b61d9" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.567682 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a679da4154299f279eaa3c32b4bd983d6f2b415334f4180184ccc78c11b61d9"} err="failed to get container status \"6a679da4154299f279eaa3c32b4bd983d6f2b415334f4180184ccc78c11b61d9\": rpc error: code = NotFound desc = could not find container \"6a679da4154299f279eaa3c32b4bd983d6f2b415334f4180184ccc78c11b61d9\": container with ID starting with 6a679da4154299f279eaa3c32b4bd983d6f2b415334f4180184ccc78c11b61d9 not found: ID does not exist" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.605185 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.718420 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f60c3670-b331-4694-a090-36bea1a48307-horizon-secret-key\") pod \"f60c3670-b331-4694-a090-36bea1a48307\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.718538 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-config-data\") pod \"f60c3670-b331-4694-a090-36bea1a48307\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.718632 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60c3670-b331-4694-a090-36bea1a48307-logs\") pod \"f60c3670-b331-4694-a090-36bea1a48307\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.718674 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsxds\" (UniqueName: \"kubernetes.io/projected/f60c3670-b331-4694-a090-36bea1a48307-kube-api-access-xsxds\") pod \"f60c3670-b331-4694-a090-36bea1a48307\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.718773 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-scripts\") pod \"f60c3670-b331-4694-a090-36bea1a48307\" (UID: \"f60c3670-b331-4694-a090-36bea1a48307\") " Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.720236 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f60c3670-b331-4694-a090-36bea1a48307-logs" (OuterVolumeSpecName: "logs") pod "f60c3670-b331-4694-a090-36bea1a48307" (UID: "f60c3670-b331-4694-a090-36bea1a48307"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.726885 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60c3670-b331-4694-a090-36bea1a48307-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f60c3670-b331-4694-a090-36bea1a48307" (UID: "f60c3670-b331-4694-a090-36bea1a48307"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.726949 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60c3670-b331-4694-a090-36bea1a48307-kube-api-access-xsxds" (OuterVolumeSpecName: "kube-api-access-xsxds") pod "f60c3670-b331-4694-a090-36bea1a48307" (UID: "f60c3670-b331-4694-a090-36bea1a48307"). InnerVolumeSpecName "kube-api-access-xsxds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.744956 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-config-data" (OuterVolumeSpecName: "config-data") pod "f60c3670-b331-4694-a090-36bea1a48307" (UID: "f60c3670-b331-4694-a090-36bea1a48307"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.747859 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-scripts" (OuterVolumeSpecName: "scripts") pod "f60c3670-b331-4694-a090-36bea1a48307" (UID: "f60c3670-b331-4694-a090-36bea1a48307"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.821440 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.821811 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60c3670-b331-4694-a090-36bea1a48307-logs\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.821929 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsxds\" (UniqueName: \"kubernetes.io/projected/f60c3670-b331-4694-a090-36bea1a48307-kube-api-access-xsxds\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.822049 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f60c3670-b331-4694-a090-36bea1a48307-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:06 crc kubenswrapper[4909]: I1126 08:39:06.822154 4909 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f60c3670-b331-4694-a090-36bea1a48307-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:07 crc kubenswrapper[4909]: I1126 08:39:07.372207 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844fcddd89-784bm" event={"ID":"f60c3670-b331-4694-a090-36bea1a48307","Type":"ContainerDied","Data":"840cef685df8130e95e5850193993750d4d38e2a9b37083cf174258740b2177f"} Nov 26 08:39:07 crc kubenswrapper[4909]: I1126 08:39:07.372258 4909 scope.go:117] "RemoveContainer" containerID="5b8a522d3b3f036f06658f214ad156b9b7341fa4b260dc7d3f29b305baa5f098" Nov 26 08:39:07 crc kubenswrapper[4909]: I1126 08:39:07.372261 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-844fcddd89-784bm" Nov 26 08:39:07 crc kubenswrapper[4909]: I1126 08:39:07.439744 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-844fcddd89-784bm"] Nov 26 08:39:07 crc kubenswrapper[4909]: I1126 08:39:07.452171 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-844fcddd89-784bm"] Nov 26 08:39:07 crc kubenswrapper[4909]: I1126 08:39:07.619148 4909 scope.go:117] "RemoveContainer" containerID="5a2c7e376dd621080466e2cc928f88c2991367a397e6aafa1a581c8ff05ab4cb" Nov 26 08:39:08 crc kubenswrapper[4909]: I1126 08:39:08.508935 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60c3670-b331-4694-a090-36bea1a48307" path="/var/lib/kubelet/pods/f60c3670-b331-4694-a090-36bea1a48307/volumes" Nov 26 08:39:09 crc kubenswrapper[4909]: I1126 08:39:09.499651 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:39:09 crc kubenswrapper[4909]: E1126 08:39:09.500165 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.435341 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h"] Nov 26 08:39:11 crc kubenswrapper[4909]: E1126 08:39:11.436186 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.436204 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon" Nov 26 08:39:11 crc kubenswrapper[4909]: E1126 08:39:11.436226 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662d4438-0b22-4382-ba6e-fc486bbf63f3" containerName="extract-content" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.436234 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="662d4438-0b22-4382-ba6e-fc486bbf63f3" containerName="extract-content" Nov 26 08:39:11 crc kubenswrapper[4909]: E1126 08:39:11.436250 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662d4438-0b22-4382-ba6e-fc486bbf63f3" containerName="registry-server" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.436258 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="662d4438-0b22-4382-ba6e-fc486bbf63f3" containerName="registry-server" Nov 26 08:39:11 crc kubenswrapper[4909]: E1126 08:39:11.436278 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662d4438-0b22-4382-ba6e-fc486bbf63f3" containerName="extract-utilities" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.436286 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="662d4438-0b22-4382-ba6e-fc486bbf63f3" containerName="extract-utilities" Nov 26 08:39:11 crc kubenswrapper[4909]: E1126 08:39:11.436307 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon-log" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.436314 4909 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon-log" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.436575 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="662d4438-0b22-4382-ba6e-fc486bbf63f3" containerName="registry-server" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.436621 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon-log" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.436635 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60c3670-b331-4694-a090-36bea1a48307" containerName="horizon" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.438501 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.444484 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h"] Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.446857 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.629189 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv8zq\" (UniqueName: \"kubernetes.io/projected/dc198e11-71c6-418c-828e-908f9ff0243d-kube-api-access-fv8zq\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.629291 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.629322 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.731774 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.731983 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv8zq\" (UniqueName: \"kubernetes.io/projected/dc198e11-71c6-418c-828e-908f9ff0243d-kube-api-access-fv8zq\") pod 
\"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.732130 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.732614 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.732896 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.761801 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv8zq\" (UniqueName: \"kubernetes.io/projected/dc198e11-71c6-418c-828e-908f9ff0243d-kube-api-access-fv8zq\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:11 crc kubenswrapper[4909]: I1126 08:39:11.767455 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:12 crc kubenswrapper[4909]: W1126 08:39:12.269207 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc198e11_71c6_418c_828e_908f9ff0243d.slice/crio-611304e401d9726f1c721cb9f4c7b63edf3f106b65271772799b4b801b78c0cd WatchSource:0}: Error finding container 611304e401d9726f1c721cb9f4c7b63edf3f106b65271772799b4b801b78c0cd: Status 404 returned error can't find the container with id 611304e401d9726f1c721cb9f4c7b63edf3f106b65271772799b4b801b78c0cd Nov 26 08:39:12 crc kubenswrapper[4909]: I1126 08:39:12.281412 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h"] Nov 26 08:39:12 crc kubenswrapper[4909]: I1126 08:39:12.444078 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" event={"ID":"dc198e11-71c6-418c-828e-908f9ff0243d","Type":"ContainerStarted","Data":"611304e401d9726f1c721cb9f4c7b63edf3f106b65271772799b4b801b78c0cd"} Nov 26 08:39:13 crc kubenswrapper[4909]: I1126 08:39:13.456913 4909 generic.go:334] "Generic (PLEG): container finished" podID="dc198e11-71c6-418c-828e-908f9ff0243d" containerID="ea49d85d3ba2aafbd032820cdf06dc393a9219204e6e512d09e1db3719c5a1f8" exitCode=0 Nov 26 08:39:13 crc kubenswrapper[4909]: I1126 08:39:13.457028 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" event={"ID":"dc198e11-71c6-418c-828e-908f9ff0243d","Type":"ContainerDied","Data":"ea49d85d3ba2aafbd032820cdf06dc393a9219204e6e512d09e1db3719c5a1f8"} Nov 26 08:39:16 crc kubenswrapper[4909]: I1126 08:39:16.506706 4909 generic.go:334] "Generic (PLEG): container finished" podID="dc198e11-71c6-418c-828e-908f9ff0243d" containerID="4a744f97ed48bdd020ebb85987924a26f166aa57956d598b5ccb4ef67acd5027" exitCode=0 Nov 26 08:39:16 crc kubenswrapper[4909]: I1126 08:39:16.525823 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" event={"ID":"dc198e11-71c6-418c-828e-908f9ff0243d","Type":"ContainerDied","Data":"4a744f97ed48bdd020ebb85987924a26f166aa57956d598b5ccb4ef67acd5027"} Nov 26 08:39:17 crc kubenswrapper[4909]: I1126 08:39:17.521865 4909 generic.go:334] "Generic (PLEG): container finished" podID="dc198e11-71c6-418c-828e-908f9ff0243d" containerID="1d3eb6d8943f84668a90a3df12f1aac8c2a6bb5ac1cfb36367fd28229a0c7e0a" exitCode=0 Nov 26 08:39:17 crc kubenswrapper[4909]: I1126 08:39:17.522641 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" event={"ID":"dc198e11-71c6-418c-828e-908f9ff0243d","Type":"ContainerDied","Data":"1d3eb6d8943f84668a90a3df12f1aac8c2a6bb5ac1cfb36367fd28229a0c7e0a"} Nov 26 08:39:18 crc kubenswrapper[4909]: I1126 08:39:18.918506 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.091683 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv8zq\" (UniqueName: \"kubernetes.io/projected/dc198e11-71c6-418c-828e-908f9ff0243d-kube-api-access-fv8zq\") pod \"dc198e11-71c6-418c-828e-908f9ff0243d\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.091794 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-util\") pod \"dc198e11-71c6-418c-828e-908f9ff0243d\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.091868 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-bundle\") pod \"dc198e11-71c6-418c-828e-908f9ff0243d\" (UID: \"dc198e11-71c6-418c-828e-908f9ff0243d\") " Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.095839 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-bundle" (OuterVolumeSpecName: "bundle") pod "dc198e11-71c6-418c-828e-908f9ff0243d" (UID: "dc198e11-71c6-418c-828e-908f9ff0243d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.097816 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc198e11-71c6-418c-828e-908f9ff0243d-kube-api-access-fv8zq" (OuterVolumeSpecName: "kube-api-access-fv8zq") pod "dc198e11-71c6-418c-828e-908f9ff0243d" (UID: "dc198e11-71c6-418c-828e-908f9ff0243d"). InnerVolumeSpecName "kube-api-access-fv8zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.102459 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-util" (OuterVolumeSpecName: "util") pod "dc198e11-71c6-418c-828e-908f9ff0243d" (UID: "dc198e11-71c6-418c-828e-908f9ff0243d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.193878 4909 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-util\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.193915 4909 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc198e11-71c6-418c-828e-908f9ff0243d-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.193929 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv8zq\" (UniqueName: \"kubernetes.io/projected/dc198e11-71c6-418c-828e-908f9ff0243d-kube-api-access-fv8zq\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.551573 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" event={"ID":"dc198e11-71c6-418c-828e-908f9ff0243d","Type":"ContainerDied","Data":"611304e401d9726f1c721cb9f4c7b63edf3f106b65271772799b4b801b78c0cd"} Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.551643 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h" Nov 26 08:39:19 crc kubenswrapper[4909]: I1126 08:39:19.551710 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="611304e401d9726f1c721cb9f4c7b63edf3f106b65271772799b4b801b78c0cd" Nov 26 08:39:22 crc kubenswrapper[4909]: I1126 08:39:22.499865 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:39:22 crc kubenswrapper[4909]: E1126 08:39:22.500668 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.175308 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-mn7t6"] Nov 26 08:39:29 crc kubenswrapper[4909]: E1126 08:39:29.176389 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc198e11-71c6-418c-828e-908f9ff0243d" containerName="pull" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.176405 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc198e11-71c6-418c-828e-908f9ff0243d" containerName="pull" Nov 26 08:39:29 crc kubenswrapper[4909]: E1126 08:39:29.176439 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc198e11-71c6-418c-828e-908f9ff0243d" containerName="util" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.176448 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc198e11-71c6-418c-828e-908f9ff0243d" containerName="util" Nov 26 08:39:29 crc kubenswrapper[4909]: E1126 08:39:29.176482 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc198e11-71c6-418c-828e-908f9ff0243d" containerName="extract" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.176491 4909 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="dc198e11-71c6-418c-828e-908f9ff0243d" containerName="extract" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.194481 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc198e11-71c6-418c-828e-908f9ff0243d" containerName="extract" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.198969 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-mn7t6"] Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.199105 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mn7t6" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.219139 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-qn7b7" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.219541 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.219726 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.314633 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5nhq\" (UniqueName: \"kubernetes.io/projected/593f8066-ac54-40b8-a70d-4146a75a4615-kube-api-access-v5nhq\") pod \"obo-prometheus-operator-668cf9dfbb-mn7t6\" (UID: \"593f8066-ac54-40b8-a70d-4146a75a4615\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mn7t6" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.314924 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k"] Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.316208 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.320009 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-gzvzm" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.320180 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.420612 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e5bba19-5d8e-4164-bb73-71cd72bc3f47-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k\" (UID: \"8e5bba19-5d8e-4164-bb73-71cd72bc3f47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.420665 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5nhq\" (UniqueName: \"kubernetes.io/projected/593f8066-ac54-40b8-a70d-4146a75a4615-kube-api-access-v5nhq\") pod \"obo-prometheus-operator-668cf9dfbb-mn7t6\" (UID: \"593f8066-ac54-40b8-a70d-4146a75a4615\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mn7t6" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.420786 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e5bba19-5d8e-4164-bb73-71cd72bc3f47-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k\" (UID: \"8e5bba19-5d8e-4164-bb73-71cd72bc3f47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.427913 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k"] Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.461299 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5nhq\" (UniqueName: \"kubernetes.io/projected/593f8066-ac54-40b8-a70d-4146a75a4615-kube-api-access-v5nhq\") pod \"obo-prometheus-operator-668cf9dfbb-mn7t6\" (UID: \"593f8066-ac54-40b8-a70d-4146a75a4615\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mn7t6" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.472977 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f"] Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.474427 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.487248 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f"] Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.514048 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6zzck"] Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.516010 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.518246 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-9gdgr" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.518389 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.527369 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e5bba19-5d8e-4164-bb73-71cd72bc3f47-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k\" (UID: \"8e5bba19-5d8e-4164-bb73-71cd72bc3f47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.527455 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e5bba19-5d8e-4164-bb73-71cd72bc3f47-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k\" (UID: \"8e5bba19-5d8e-4164-bb73-71cd72bc3f47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.535641 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6zzck"] Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.535762 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e5bba19-5d8e-4164-bb73-71cd72bc3f47-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k\" (UID: \"8e5bba19-5d8e-4164-bb73-71cd72bc3f47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.560057 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mn7t6" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.572212 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e5bba19-5d8e-4164-bb73-71cd72bc3f47-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k\" (UID: \"8e5bba19-5d8e-4164-bb73-71cd72bc3f47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.629745 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f\" (UID: \"b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.629797 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f\" (UID: \"b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.629833 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4e5c3951-a95e-459a-99f6-3e405bb4d8f8-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6zzck\" (UID: \"4e5c3951-a95e-459a-99f6-3e405bb4d8f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.629906 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmf8\" (UniqueName: \"kubernetes.io/projected/4e5c3951-a95e-459a-99f6-3e405bb4d8f8-kube-api-access-wzmf8\") pod \"observability-operator-d8bb48f5d-6zzck\" (UID: \"4e5c3951-a95e-459a-99f6-3e405bb4d8f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.634117 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-4sxd6"] Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.638922 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-4sxd6" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.645225 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-r4s94" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.666223 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-4sxd6"] Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.711030 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.733321 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmf8\" (UniqueName: \"kubernetes.io/projected/4e5c3951-a95e-459a-99f6-3e405bb4d8f8-kube-api-access-wzmf8\") pod \"observability-operator-d8bb48f5d-6zzck\" (UID: \"4e5c3951-a95e-459a-99f6-3e405bb4d8f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.733475 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f\" (UID: \"b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.733497 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f\" (UID: \"b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.733533 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4e5c3951-a95e-459a-99f6-3e405bb4d8f8-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6zzck\" (UID: \"4e5c3951-a95e-459a-99f6-3e405bb4d8f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.741417 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4e5c3951-a95e-459a-99f6-3e405bb4d8f8-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6zzck\" (UID: \"4e5c3951-a95e-459a-99f6-3e405bb4d8f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.743316 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f\" (UID: \"b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.746986 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f\" (UID: \"b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.758725 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzmf8\" (UniqueName: \"kubernetes.io/projected/4e5c3951-a95e-459a-99f6-3e405bb4d8f8-kube-api-access-wzmf8\") pod \"observability-operator-d8bb48f5d-6zzck\" (UID: 
\"4e5c3951-a95e-459a-99f6-3e405bb4d8f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.835654 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz99f\" (UniqueName: \"kubernetes.io/projected/a697aca5-82ec-4422-9e17-7dfadbee7ab2-kube-api-access-qz99f\") pod \"perses-operator-5446b9c989-4sxd6\" (UID: \"a697aca5-82ec-4422-9e17-7dfadbee7ab2\") " pod="openshift-operators/perses-operator-5446b9c989-4sxd6" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.835952 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a697aca5-82ec-4422-9e17-7dfadbee7ab2-openshift-service-ca\") pod \"perses-operator-5446b9c989-4sxd6\" (UID: \"a697aca5-82ec-4422-9e17-7dfadbee7ab2\") " pod="openshift-operators/perses-operator-5446b9c989-4sxd6" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.933875 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.941200 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz99f\" (UniqueName: \"kubernetes.io/projected/a697aca5-82ec-4422-9e17-7dfadbee7ab2-kube-api-access-qz99f\") pod \"perses-operator-5446b9c989-4sxd6\" (UID: \"a697aca5-82ec-4422-9e17-7dfadbee7ab2\") " pod="openshift-operators/perses-operator-5446b9c989-4sxd6" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.941329 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a697aca5-82ec-4422-9e17-7dfadbee7ab2-openshift-service-ca\") pod \"perses-operator-5446b9c989-4sxd6\" (UID: \"a697aca5-82ec-4422-9e17-7dfadbee7ab2\") " pod="openshift-operators/perses-operator-5446b9c989-4sxd6" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.942497 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a697aca5-82ec-4422-9e17-7dfadbee7ab2-openshift-service-ca\") pod \"perses-operator-5446b9c989-4sxd6\" (UID: \"a697aca5-82ec-4422-9e17-7dfadbee7ab2\") " pod="openshift-operators/perses-operator-5446b9c989-4sxd6" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.955277 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" Nov 26 08:39:29 crc kubenswrapper[4909]: I1126 08:39:29.966389 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz99f\" (UniqueName: \"kubernetes.io/projected/a697aca5-82ec-4422-9e17-7dfadbee7ab2-kube-api-access-qz99f\") pod \"perses-operator-5446b9c989-4sxd6\" (UID: \"a697aca5-82ec-4422-9e17-7dfadbee7ab2\") " pod="openshift-operators/perses-operator-5446b9c989-4sxd6" Nov 26 08:39:30 crc kubenswrapper[4909]: I1126 08:39:30.115491 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-4sxd6" Nov 26 08:39:30 crc kubenswrapper[4909]: I1126 08:39:30.316327 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-mn7t6"] Nov 26 08:39:30 crc kubenswrapper[4909]: W1126 08:39:30.333077 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e5bba19_5d8e_4164_bb73_71cd72bc3f47.slice/crio-56f50741a7ca3f9f07bfa50d77cb3ec691413a22fda87e907a365f3b3252d999 WatchSource:0}: Error finding container 56f50741a7ca3f9f07bfa50d77cb3ec691413a22fda87e907a365f3b3252d999: Status 404 returned error can't find the container with id 56f50741a7ca3f9f07bfa50d77cb3ec691413a22fda87e907a365f3b3252d999 Nov 26 08:39:30 crc kubenswrapper[4909]: I1126 08:39:30.333700 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k"] Nov 26 08:39:30 crc kubenswrapper[4909]: I1126 08:39:30.528702 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f"] Nov 26 08:39:30 crc kubenswrapper[4909]: I1126 08:39:30.653252 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6zzck"] Nov 26 08:39:30 crc kubenswrapper[4909]: I1126 08:39:30.663900 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f" event={"ID":"b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8","Type":"ContainerStarted","Data":"dd9e9375a6a1ba5dc1e575484c288aa207349bc72f85211de7750c49c3332b9f"} Nov 26 08:39:30 crc kubenswrapper[4909]: I1126 08:39:30.664935 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k" event={"ID":"8e5bba19-5d8e-4164-bb73-71cd72bc3f47","Type":"ContainerStarted","Data":"56f50741a7ca3f9f07bfa50d77cb3ec691413a22fda87e907a365f3b3252d999"} Nov 26 08:39:30 crc kubenswrapper[4909]: I1126 08:39:30.665770 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mn7t6" event={"ID":"593f8066-ac54-40b8-a70d-4146a75a4615","Type":"ContainerStarted","Data":"127f1eb465cb703f6f688d4d01907de48c96d313b3bc3640e2ddfa2e901889fa"} Nov 26 08:39:30 crc kubenswrapper[4909]: I1126 08:39:30.666566 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" event={"ID":"4e5c3951-a95e-459a-99f6-3e405bb4d8f8","Type":"ContainerStarted","Data":"7bf0497e845aacf6aa3dcf35cd7dbcd1bb6b4a254c43acba8d786d7a595e61ba"} Nov 26 08:39:30 crc kubenswrapper[4909]: I1126 08:39:30.737395 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-4sxd6"] Nov 26 08:39:30 crc kubenswrapper[4909]: W1126 08:39:30.741523 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda697aca5_82ec_4422_9e17_7dfadbee7ab2.slice/crio-9cf39418b1f7c4e18f30ae27fddb4a749c100e0056ede047325fba34ded2af81 WatchSource:0}: Error finding container 9cf39418b1f7c4e18f30ae27fddb4a749c100e0056ede047325fba34ded2af81: Status 404 returned error can't find the container with id 9cf39418b1f7c4e18f30ae27fddb4a749c100e0056ede047325fba34ded2af81 Nov 26 08:39:31 crc kubenswrapper[4909]: I1126 
08:39:31.679262 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-4sxd6" event={"ID":"a697aca5-82ec-4422-9e17-7dfadbee7ab2","Type":"ContainerStarted","Data":"9cf39418b1f7c4e18f30ae27fddb4a749c100e0056ede047325fba34ded2af81"} Nov 26 08:39:33 crc kubenswrapper[4909]: I1126 08:39:33.501132 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:39:33 crc kubenswrapper[4909]: E1126 08:39:33.501899 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.058694 4909 scope.go:117] "RemoveContainer" containerID="32f44e64e05d4f0e24753c975c3bcff90168387f26b6a0adaa13cc38b63288a2" Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.167753 4909 scope.go:117] "RemoveContainer" containerID="d9bb9fe67506888b513f123dcd69894e6de7804ee8c76e8fe3fb5e595b0cb069" Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.787124 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f" event={"ID":"b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8","Type":"ContainerStarted","Data":"500b6f1b89058218bb2f9d4e49260f0e5d73580a4ec7d44bfee957fab6c6ec73"} Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.789054 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" event={"ID":"4e5c3951-a95e-459a-99f6-3e405bb4d8f8","Type":"ContainerStarted","Data":"e4eccaa6527dddd1d2bb1c96fd969e8fccd3e4ada6b16ac135fac23ccffc434e"} Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.789649 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.791342 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-4sxd6" event={"ID":"a697aca5-82ec-4422-9e17-7dfadbee7ab2","Type":"ContainerStarted","Data":"4e58823007f3829e149d1918c52b0407c6aff57e278bc827684efccfd620559b"} Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.791513 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-4sxd6" Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.793303 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k" event={"ID":"8e5bba19-5d8e-4164-bb73-71cd72bc3f47","Type":"ContainerStarted","Data":"0a242676294478e6593b24cb5596ae41f7b69921eadc290a33fc8e06dbd225e3"} Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.795130 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mn7t6" event={"ID":"593f8066-ac54-40b8-a70d-4146a75a4615","Type":"ContainerStarted","Data":"f38ecf45ba2b91e77b10a2edaed0471cb2e9b5c4469e0313c718ecdf4179a9ee"} Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.808764 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f" podStartSLOduration=2.8431332769999997 podStartE2EDuration="11.808743899s" podCreationTimestamp="2025-11-26 08:39:29 +0000 UTC" firstStartedPulling="2025-11-26 08:39:30.514701275 +0000 UTC m=+5942.660912441" lastFinishedPulling="2025-11-26 08:39:39.480311897 +0000 UTC m=+5951.626523063" observedRunningTime="2025-11-26 08:39:40.803233679 +0000 UTC m=+5952.949444845" watchObservedRunningTime="2025-11-26 08:39:40.808743899 +0000 UTC m=+5952.954955065" Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.836061 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.864127 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-6zzck" podStartSLOduration=3.011247637 podStartE2EDuration="11.864106781s" podCreationTimestamp="2025-11-26 08:39:29 +0000 UTC" firstStartedPulling="2025-11-26 08:39:30.647719216 +0000 UTC m=+5942.793930382" lastFinishedPulling="2025-11-26 08:39:39.50057836 +0000 UTC m=+5951.646789526" observedRunningTime="2025-11-26 08:39:40.838178224 +0000 UTC m=+5952.984389390" watchObservedRunningTime="2025-11-26 08:39:40.864106781 +0000 UTC m=+5953.010317947" Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.873580 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k" podStartSLOduration=2.738050227 podStartE2EDuration="11.873561429s" podCreationTimestamp="2025-11-26 08:39:29 +0000 UTC" firstStartedPulling="2025-11-26 08:39:30.338349239 +0000 UTC m=+5942.484560405" lastFinishedPulling="2025-11-26 08:39:39.473860441 +0000 UTC m=+5951.620071607" observedRunningTime="2025-11-26 08:39:40.868653846 +0000 UTC m=+5953.014865062" watchObservedRunningTime="2025-11-26 08:39:40.873561429 +0000 UTC m=+5953.019772595" Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.908879 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mn7t6" podStartSLOduration=2.750170448 podStartE2EDuration="11.908862533s" podCreationTimestamp="2025-11-26 08:39:29 +0000 UTC" firstStartedPulling="2025-11-26 08:39:30.317838519 +0000 UTC m=+5942.464049685" lastFinishedPulling="2025-11-26 08:39:39.476530594 +0000 UTC m=+5951.622741770" observedRunningTime="2025-11-26 08:39:40.905893852 +0000 UTC m=+5953.052105028" watchObservedRunningTime="2025-11-26 08:39:40.908862533 +0000 UTC m=+5953.055073699" Nov 26 08:39:40 crc kubenswrapper[4909]: I1126 08:39:40.934359 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-4sxd6" podStartSLOduration=3.204126673 podStartE2EDuration="11.934347199s" podCreationTimestamp="2025-11-26 08:39:29 +0000 UTC" firstStartedPulling="2025-11-26 08:39:30.744240462 +0000 UTC m=+5942.890451628" lastFinishedPulling="2025-11-26 08:39:39.474460988 +0000 UTC m=+5951.620672154" observedRunningTime="2025-11-26 08:39:40.931006738 +0000 UTC m=+5953.077217904" watchObservedRunningTime="2025-11-26 08:39:40.934347199 +0000 UTC m=+5953.080558365" Nov 26 08:39:44 crc kubenswrapper[4909]: I1126 08:39:44.048404 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-68n7m"] Nov 26 08:39:44 crc kubenswrapper[4909]: I1126 
08:39:44.066548 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wtrjc"] Nov 26 08:39:44 crc kubenswrapper[4909]: I1126 08:39:44.076081 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-cs2kv"] Nov 26 08:39:44 crc kubenswrapper[4909]: I1126 08:39:44.084934 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wtrjc"] Nov 26 08:39:44 crc kubenswrapper[4909]: I1126 08:39:44.113546 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-68n7m"] Nov 26 08:39:44 crc kubenswrapper[4909]: I1126 08:39:44.144573 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-cs2kv"] Nov 26 08:39:44 crc kubenswrapper[4909]: I1126 08:39:44.544786 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5af06f-0585-46f5-8e74-1ad13113c497" path="/var/lib/kubelet/pods/2b5af06f-0585-46f5-8e74-1ad13113c497/volumes" Nov 26 08:39:44 crc kubenswrapper[4909]: I1126 08:39:44.545467 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="707619c0-8313-407a-965a-d1d4f6de44d1" path="/var/lib/kubelet/pods/707619c0-8313-407a-965a-d1d4f6de44d1/volumes" Nov 26 08:39:44 crc kubenswrapper[4909]: I1126 08:39:44.546105 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775ccc85-f26e-477f-a010-3ec3418ebadf" path="/var/lib/kubelet/pods/775ccc85-f26e-477f-a010-3ec3418ebadf/volumes" Nov 26 08:39:46 crc kubenswrapper[4909]: I1126 08:39:46.498947 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:39:46 crc kubenswrapper[4909]: I1126 08:39:46.855181 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"ffb78bc615b04ce4741527e4e6ccf051dacb1f8a8314365cec3e038b02b7715e"} Nov 26 08:39:50 crc kubenswrapper[4909]: I1126 08:39:50.120333 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-4sxd6" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.190471 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.191304 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="537df512-9370-4be2-9796-c5ed4615d017" containerName="openstackclient" containerID="cri-o://8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd" gracePeriod=2 Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.201787 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.232216 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 26 08:39:53 crc kubenswrapper[4909]: E1126 08:39:53.232654 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="537df512-9370-4be2-9796-c5ed4615d017" containerName="openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.232675 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="537df512-9370-4be2-9796-c5ed4615d017" containerName="openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.232878 4909 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="537df512-9370-4be2-9796-c5ed4615d017" containerName="openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.233551 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.259474 4909 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="537df512-9370-4be2-9796-c5ed4615d017" podUID="bcdfadfe-a37d-400e-8a94-e28e2685cc92" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.294670 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.316652 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9v5d\" (UniqueName: \"kubernetes.io/projected/bcdfadfe-a37d-400e-8a94-e28e2685cc92-kube-api-access-z9v5d\") pod \"openstackclient\" (UID: \"bcdfadfe-a37d-400e-8a94-e28e2685cc92\") " pod="openstack/openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.316745 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bcdfadfe-a37d-400e-8a94-e28e2685cc92-openstack-config\") pod \"openstackclient\" (UID: \"bcdfadfe-a37d-400e-8a94-e28e2685cc92\") " pod="openstack/openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.316904 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bcdfadfe-a37d-400e-8a94-e28e2685cc92-openstack-config-secret\") pod \"openstackclient\" (UID: \"bcdfadfe-a37d-400e-8a94-e28e2685cc92\") " pod="openstack/openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.418670 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bcdfadfe-a37d-400e-8a94-e28e2685cc92-openstack-config-secret\") pod \"openstackclient\" (UID: \"bcdfadfe-a37d-400e-8a94-e28e2685cc92\") " pod="openstack/openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.418792 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9v5d\" (UniqueName: \"kubernetes.io/projected/bcdfadfe-a37d-400e-8a94-e28e2685cc92-kube-api-access-z9v5d\") pod \"openstackclient\" (UID: \"bcdfadfe-a37d-400e-8a94-e28e2685cc92\") " pod="openstack/openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.418837 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bcdfadfe-a37d-400e-8a94-e28e2685cc92-openstack-config\") pod \"openstackclient\" (UID: \"bcdfadfe-a37d-400e-8a94-e28e2685cc92\") " pod="openstack/openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.421771 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bcdfadfe-a37d-400e-8a94-e28e2685cc92-openstack-config\") pod \"openstackclient\" (UID: \"bcdfadfe-a37d-400e-8a94-e28e2685cc92\") " pod="openstack/openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.428433 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/bcdfadfe-a37d-400e-8a94-e28e2685cc92-openstack-config-secret\") pod \"openstackclient\" (UID: \"bcdfadfe-a37d-400e-8a94-e28e2685cc92\") " pod="openstack/openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.452981 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.454316 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.458936 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-7c69q" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.459131 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9v5d\" (UniqueName: \"kubernetes.io/projected/bcdfadfe-a37d-400e-8a94-e28e2685cc92-kube-api-access-z9v5d\") pod \"openstackclient\" (UID: \"bcdfadfe-a37d-400e-8a94-e28e2685cc92\") " pod="openstack/openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.492941 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.585118 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.624778 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hzd\" (UniqueName: \"kubernetes.io/projected/5c5a4076-8f8e-4924-bb54-e47258b70aac-kube-api-access-k5hzd\") pod \"kube-state-metrics-0\" (UID: \"5c5a4076-8f8e-4924-bb54-e47258b70aac\") " pod="openstack/kube-state-metrics-0" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.741134 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hzd\" (UniqueName: \"kubernetes.io/projected/5c5a4076-8f8e-4924-bb54-e47258b70aac-kube-api-access-k5hzd\") pod \"kube-state-metrics-0\" (UID: \"5c5a4076-8f8e-4924-bb54-e47258b70aac\") " pod="openstack/kube-state-metrics-0" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.779877 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hzd\" (UniqueName: \"kubernetes.io/projected/5c5a4076-8f8e-4924-bb54-e47258b70aac-kube-api-access-k5hzd\") pod \"kube-state-metrics-0\" (UID: \"5c5a4076-8f8e-4924-bb54-e47258b70aac\") " pod="openstack/kube-state-metrics-0" Nov 26 08:39:53 crc kubenswrapper[4909]: I1126 08:39:53.886860 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.151314 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7761-account-create-76j7n"] Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.165710 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7761-account-create-76j7n"] Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.352666 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.354756 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.372943 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.373127 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.373227 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-lqzzj" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.375946 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.394418 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.403113 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.463723 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b69a9f41-2ca0-413c-bf21-9dd70af4e486-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.463790 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/b69a9f41-2ca0-413c-bf21-9dd70af4e486-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.463827 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/b69a9f41-2ca0-413c-bf21-9dd70af4e486-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.463865 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b69a9f41-2ca0-413c-bf21-9dd70af4e486-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.463901 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b69a9f41-2ca0-413c-bf21-9dd70af4e486-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.463922 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b69a9f41-2ca0-413c-bf21-9dd70af4e486-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: 
\"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.463953 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84t5k\" (UniqueName: \"kubernetes.io/projected/b69a9f41-2ca0-413c-bf21-9dd70af4e486-kube-api-access-84t5k\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.539861 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99aa8a46-1169-42cc-8693-cd3e0d1f46a5" path="/var/lib/kubelet/pods/99aa8a46-1169-42cc-8693-cd3e0d1f46a5/volumes" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.571317 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/b69a9f41-2ca0-413c-bf21-9dd70af4e486-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.571362 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/b69a9f41-2ca0-413c-bf21-9dd70af4e486-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.571407 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b69a9f41-2ca0-413c-bf21-9dd70af4e486-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.571442 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b69a9f41-2ca0-413c-bf21-9dd70af4e486-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.571463 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b69a9f41-2ca0-413c-bf21-9dd70af4e486-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.571495 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84t5k\" (UniqueName: \"kubernetes.io/projected/b69a9f41-2ca0-413c-bf21-9dd70af4e486-kube-api-access-84t5k\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.571559 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b69a9f41-2ca0-413c-bf21-9dd70af4e486-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc 
kubenswrapper[4909]: I1126 08:39:54.584940 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/b69a9f41-2ca0-413c-bf21-9dd70af4e486-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.585828 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.602321 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b69a9f41-2ca0-413c-bf21-9dd70af4e486-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.608378 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b69a9f41-2ca0-413c-bf21-9dd70af4e486-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.612138 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b69a9f41-2ca0-413c-bf21-9dd70af4e486-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.614538 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84t5k\" (UniqueName: \"kubernetes.io/projected/b69a9f41-2ca0-413c-bf21-9dd70af4e486-kube-api-access-84t5k\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.615082 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b69a9f41-2ca0-413c-bf21-9dd70af4e486-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.637347 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/b69a9f41-2ca0-413c-bf21-9dd70af4e486-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"b69a9f41-2ca0-413c-bf21-9dd70af4e486\") " pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.755191 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: W1126 08:39:54.800848 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c5a4076_8f8e_4924_bb54_e47258b70aac.slice/crio-74e1d3a46335ef67f303546b798241c5d506e5d0b35c68ee6bc27dfdfd35f403 WatchSource:0}: Error finding container 74e1d3a46335ef67f303546b798241c5d506e5d0b35c68ee6bc27dfdfd35f403: Status 404 returned error can't find the container with id 74e1d3a46335ef67f303546b798241c5d506e5d0b35c68ee6bc27dfdfd35f403 Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.838139 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.874187 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.876760 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.884170 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.887043 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.887062 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.887318 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.887423 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.887545 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-tnpmd" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.887676 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.972798 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5c5a4076-8f8e-4924-bb54-e47258b70aac","Type":"ContainerStarted","Data":"74e1d3a46335ef67f303546b798241c5d506e5d0b35c68ee6bc27dfdfd35f403"} Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.974078 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bcdfadfe-a37d-400e-8a94-e28e2685cc92","Type":"ContainerStarted","Data":"3720318cee639892ff22c3e30389f8f5a578289e0a47a627d9bfb422ab7c485f"} Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.990040 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6ebb97fd-8fc5-484a-863b-043deb114430-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.990101 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcg9j\" (UniqueName: \"kubernetes.io/projected/6ebb97fd-8fc5-484a-863b-043deb114430-kube-api-access-lcg9j\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.990214 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6ebb97fd-8fc5-484a-863b-043deb114430-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.990234 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6ebb97fd-8fc5-484a-863b-043deb114430-config\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.990326 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6ebb97fd-8fc5-484a-863b-043deb114430-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.990370 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c365d107-a6a9-463a-9bdb-9cc5f632b8f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c365d107-a6a9-463a-9bdb-9cc5f632b8f6\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.990391 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6ebb97fd-8fc5-484a-863b-043deb114430-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:54 crc kubenswrapper[4909]: I1126 08:39:54.990411 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6ebb97fd-8fc5-484a-863b-043deb114430-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.054655 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-30e2-account-create-c5rnd"] Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.074475 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4109-account-create-tt7nv"] Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.091181 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-30e2-account-create-c5rnd"] Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.105940 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/6ebb97fd-8fc5-484a-863b-043deb114430-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.106650 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6ebb97fd-8fc5-484a-863b-043deb114430-config\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.107049 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6ebb97fd-8fc5-484a-863b-043deb114430-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.107865 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c365d107-a6a9-463a-9bdb-9cc5f632b8f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c365d107-a6a9-463a-9bdb-9cc5f632b8f6\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.108141 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6ebb97fd-8fc5-484a-863b-043deb114430-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.108275 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6ebb97fd-8fc5-484a-863b-043deb114430-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.109237 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6ebb97fd-8fc5-484a-863b-043deb114430-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.109364 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcg9j\" (UniqueName: \"kubernetes.io/projected/6ebb97fd-8fc5-484a-863b-043deb114430-kube-api-access-lcg9j\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.109966 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6ebb97fd-8fc5-484a-863b-043deb114430-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.115211 4909 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.115259 4909 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c365d107-a6a9-463a-9bdb-9cc5f632b8f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c365d107-a6a9-463a-9bdb-9cc5f632b8f6\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/092ac89b1acd26ff95007c79d1d870d52696d9518cd992e667e0c5de4b0ddc9c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.117104 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6ebb97fd-8fc5-484a-863b-043deb114430-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.117472 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6ebb97fd-8fc5-484a-863b-043deb114430-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.121260 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6ebb97fd-8fc5-484a-863b-043deb114430-config\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.124268 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6ebb97fd-8fc5-484a-863b-043deb114430-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.126332 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6ebb97fd-8fc5-484a-863b-043deb114430-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.146295 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcg9j\" (UniqueName: \"kubernetes.io/projected/6ebb97fd-8fc5-484a-863b-043deb114430-kube-api-access-lcg9j\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.167243 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4109-account-create-tt7nv"] Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.213002 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c365d107-a6a9-463a-9bdb-9cc5f632b8f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c365d107-a6a9-463a-9bdb-9cc5f632b8f6\") pod \"prometheus-metric-storage-0\" (UID: \"6ebb97fd-8fc5-484a-863b-043deb114430\") " pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 
crc kubenswrapper[4909]: I1126 08:39:55.219730 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.364545 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.649217 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.774024 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.834337 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/537df512-9370-4be2-9796-c5ed4615d017-openstack-config\") pod \"537df512-9370-4be2-9796-c5ed4615d017\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.834538 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/537df512-9370-4be2-9796-c5ed4615d017-openstack-config-secret\") pod \"537df512-9370-4be2-9796-c5ed4615d017\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.834611 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp8r8\" (UniqueName: \"kubernetes.io/projected/537df512-9370-4be2-9796-c5ed4615d017-kube-api-access-bp8r8\") pod \"537df512-9370-4be2-9796-c5ed4615d017\" (UID: \"537df512-9370-4be2-9796-c5ed4615d017\") " Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.842909 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537df512-9370-4be2-9796-c5ed4615d017-kube-api-access-bp8r8" (OuterVolumeSpecName: "kube-api-access-bp8r8") pod "537df512-9370-4be2-9796-c5ed4615d017" (UID: "537df512-9370-4be2-9796-c5ed4615d017"). InnerVolumeSpecName "kube-api-access-bp8r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.891647 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/537df512-9370-4be2-9796-c5ed4615d017-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "537df512-9370-4be2-9796-c5ed4615d017" (UID: "537df512-9370-4be2-9796-c5ed4615d017"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.893153 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537df512-9370-4be2-9796-c5ed4615d017-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "537df512-9370-4be2-9796-c5ed4615d017" (UID: "537df512-9370-4be2-9796-c5ed4615d017"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.936690 4909 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/537df512-9370-4be2-9796-c5ed4615d017-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.936728 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp8r8\" (UniqueName: \"kubernetes.io/projected/537df512-9370-4be2-9796-c5ed4615d017-kube-api-access-bp8r8\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.936741 4909 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/537df512-9370-4be2-9796-c5ed4615d017-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.995398 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bcdfadfe-a37d-400e-8a94-e28e2685cc92","Type":"ContainerStarted","Data":"64060a5d8868f6d064d36c8faf54df7f9966fd7baa60aa30673d03a80023b63b"} Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.997612 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"b69a9f41-2ca0-413c-bf21-9dd70af4e486","Type":"ContainerStarted","Data":"3f79cd107bec3a821da5ed6c983205bf3aa358d43e41c59ed2d5b9a9aff2b697"} Nov 26 08:39:55 crc kubenswrapper[4909]: I1126 08:39:55.999302 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6ebb97fd-8fc5-484a-863b-043deb114430","Type":"ContainerStarted","Data":"2723dfb722c58b9faa0b109fb1fd3efaa741bb36151d294e7acf32db18deb90e"} Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.002043 4909 generic.go:334] "Generic (PLEG): container finished" podID="537df512-9370-4be2-9796-c5ed4615d017" containerID="8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd" exitCode=137 Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.002083 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.002150 4909 scope.go:117] "RemoveContainer" containerID="8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd" Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.005247 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5c5a4076-8f8e-4924-bb54-e47258b70aac","Type":"ContainerStarted","Data":"3f005aa58eefa994b87d922997ace95b26f09330e639044751a32776ecbd024d"} Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.005526 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.017061 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.017041187 podStartE2EDuration="3.017041187s" podCreationTimestamp="2025-11-26 08:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:39:56.011363252 +0000 UTC m=+5968.157574418" watchObservedRunningTime="2025-11-26 08:39:56.017041187 +0000 UTC m=+5968.163252353" Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.028385 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.558209148 podStartE2EDuration="3.028370786s" podCreationTimestamp="2025-11-26 08:39:53 +0000 UTC" firstStartedPulling="2025-11-26 08:39:54.818743257 +0000 UTC m=+5966.964954423" lastFinishedPulling="2025-11-26 08:39:55.288904895 +0000 UTC m=+5967.435116061" observedRunningTime="2025-11-26 08:39:56.023691859 +0000 UTC m=+5968.169903025" watchObservedRunningTime="2025-11-26 08:39:56.028370786 +0000 UTC m=+5968.174581952" Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.028491 4909 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="537df512-9370-4be2-9796-c5ed4615d017" podUID="bcdfadfe-a37d-400e-8a94-e28e2685cc92" Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.050111 4909 scope.go:117] "RemoveContainer" containerID="8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd" Nov 26 08:39:56 crc kubenswrapper[4909]: E1126 08:39:56.064722 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd\": container with ID starting with 8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd not found: ID does not exist" containerID="8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd" Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.064789 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd"} err="failed to get container status \"8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd\": rpc error: code = NotFound desc = could not find container \"8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd\": container with ID starting with 8ea714d914bcf63011e8c2106d22a3f7a03e15d246ff8541e8dde8023622eadd not found: ID does not exist" Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.510846 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="032f0f9c-e208-4d1f-a169-4d39679324bb" path="/var/lib/kubelet/pods/032f0f9c-e208-4d1f-a169-4d39679324bb/volumes" Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.512108 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8" path="/var/lib/kubelet/pods/3dda3a1e-74f3-4183-b9cc-d29fac5fa0d8/volumes" Nov 26 08:39:56 crc kubenswrapper[4909]: I1126 08:39:56.512866 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="537df512-9370-4be2-9796-c5ed4615d017" path="/var/lib/kubelet/pods/537df512-9370-4be2-9796-c5ed4615d017/volumes" Nov 26 08:40:02 crc kubenswrapper[4909]: I1126 08:40:02.079953 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6ebb97fd-8fc5-484a-863b-043deb114430","Type":"ContainerStarted","Data":"e6565dd2b9afcad5c91bfe18b2bafbda9f6bfbefda93fedeb5ddcce5f4601dc3"} Nov 26 08:40:02 crc kubenswrapper[4909]: I1126 08:40:02.083773 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"b69a9f41-2ca0-413c-bf21-9dd70af4e486","Type":"ContainerStarted","Data":"30d715503dc9fb57a35c45855947904f66fd6aa6390a4e1d862fc00ce3ed93ae"} Nov 26 08:40:03 crc kubenswrapper[4909]: I1126 08:40:03.896400 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 26 08:40:05 crc kubenswrapper[4909]: I1126 08:40:05.031041 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gv4kn"] Nov 26 08:40:05 crc kubenswrapper[4909]: I1126 08:40:05.040776 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gv4kn"] Nov 26 08:40:06 crc kubenswrapper[4909]: I1126 08:40:06.523567 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b83c9ef4-529f-4ea9-8b56-461560b56616" path="/var/lib/kubelet/pods/b83c9ef4-529f-4ea9-8b56-461560b56616/volumes" Nov 26 08:40:10 crc kubenswrapper[4909]: I1126 08:40:10.169960 4909 generic.go:334] "Generic (PLEG): container finished" podID="b69a9f41-2ca0-413c-bf21-9dd70af4e486" containerID="30d715503dc9fb57a35c45855947904f66fd6aa6390a4e1d862fc00ce3ed93ae" exitCode=0 Nov 26 08:40:10 crc kubenswrapper[4909]: I1126 08:40:10.170054 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"b69a9f41-2ca0-413c-bf21-9dd70af4e486","Type":"ContainerDied","Data":"30d715503dc9fb57a35c45855947904f66fd6aa6390a4e1d862fc00ce3ed93ae"} Nov 26 08:40:10 crc kubenswrapper[4909]: I1126 08:40:10.177228 4909 generic.go:334] "Generic (PLEG): container finished" podID="6ebb97fd-8fc5-484a-863b-043deb114430" containerID="e6565dd2b9afcad5c91bfe18b2bafbda9f6bfbefda93fedeb5ddcce5f4601dc3" exitCode=0 Nov 26 08:40:10 crc kubenswrapper[4909]: I1126 08:40:10.177277 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6ebb97fd-8fc5-484a-863b-043deb114430","Type":"ContainerDied","Data":"e6565dd2b9afcad5c91bfe18b2bafbda9f6bfbefda93fedeb5ddcce5f4601dc3"} Nov 26 08:40:14 crc kubenswrapper[4909]: I1126 08:40:14.230097 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"b69a9f41-2ca0-413c-bf21-9dd70af4e486","Type":"ContainerStarted","Data":"5713e7ffb445a36a4210efa7145c02de10418eb0fb58bde817c6216f5b16d778"} Nov 26 08:40:20 crc kubenswrapper[4909]: I1126 08:40:20.289440 4909 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"b69a9f41-2ca0-413c-bf21-9dd70af4e486","Type":"ContainerStarted","Data":"c3ef2df4e325ea127ca5343f5c4cbde13e2a3d04acf05b44b086d88589547d51"} Nov 26 08:40:20 crc kubenswrapper[4909]: I1126 08:40:20.290001 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 26 08:40:20 crc kubenswrapper[4909]: I1126 08:40:20.291473 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6ebb97fd-8fc5-484a-863b-043deb114430","Type":"ContainerStarted","Data":"a456feaa646e410992169fd0da5f4ccf730fc511488c288e337e7a6328f2f5b5"} Nov 26 08:40:20 crc kubenswrapper[4909]: I1126 08:40:20.296307 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 26 08:40:20 crc kubenswrapper[4909]: I1126 08:40:20.317087 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=8.408863674 podStartE2EDuration="26.317064911s" podCreationTimestamp="2025-11-26 08:39:54 +0000 UTC" firstStartedPulling="2025-11-26 08:39:55.421367683 +0000 UTC m=+5967.567578849" lastFinishedPulling="2025-11-26 08:40:13.32956892 +0000 UTC m=+5985.475780086" observedRunningTime="2025-11-26 08:40:20.306704968 +0000 UTC m=+5992.452916144" watchObservedRunningTime="2025-11-26 08:40:20.317064911 +0000 UTC m=+5992.463276087" Nov 26 08:40:23 crc kubenswrapper[4909]: I1126 08:40:23.050162 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-npkqd"] Nov 26 08:40:23 crc kubenswrapper[4909]: I1126 08:40:23.062978 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-npkqd"] Nov 26 08:40:24 crc kubenswrapper[4909]: I1126 08:40:24.033914 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mg6cp"] Nov 26 08:40:24 crc kubenswrapper[4909]: I1126 08:40:24.044088 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mg6cp"] Nov 26 08:40:24 crc kubenswrapper[4909]: I1126 08:40:24.330769 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6ebb97fd-8fc5-484a-863b-043deb114430","Type":"ContainerStarted","Data":"15e330df0d5f78227edb81f4b76ca12a9d0b1fbe908c686d09e998f32216ad48"} Nov 26 08:40:24 crc kubenswrapper[4909]: I1126 08:40:24.521463 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49eabb59-89f9-4fbd-8b96-7a7464bdaf30" path="/var/lib/kubelet/pods/49eabb59-89f9-4fbd-8b96-7a7464bdaf30/volumes" Nov 26 08:40:24 crc kubenswrapper[4909]: I1126 08:40:24.522269 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d00dc1-4670-458f-842f-333fa41779ca" path="/var/lib/kubelet/pods/a4d00dc1-4670-458f-842f-333fa41779ca/volumes" Nov 26 08:40:27 crc kubenswrapper[4909]: I1126 08:40:27.380786 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6ebb97fd-8fc5-484a-863b-043deb114430","Type":"ContainerStarted","Data":"37479467b1eff69294eb8a342439726d3813ac71e50e361d9b4a12802f984e31"} Nov 26 08:40:27 crc kubenswrapper[4909]: I1126 08:40:27.405513 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.355571831 
podStartE2EDuration="34.405488598s" podCreationTimestamp="2025-11-26 08:39:53 +0000 UTC" firstStartedPulling="2025-11-26 08:39:55.780157339 +0000 UTC m=+5967.926368505" lastFinishedPulling="2025-11-26 08:40:26.830074106 +0000 UTC m=+5998.976285272" observedRunningTime="2025-11-26 08:40:27.40410801 +0000 UTC m=+5999.550319176" watchObservedRunningTime="2025-11-26 08:40:27.405488598 +0000 UTC m=+5999.551699784" Nov 26 08:40:30 crc kubenswrapper[4909]: I1126 08:40:30.220282 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.084509 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.087421 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.089524 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.089851 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.097498 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.134276 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.134383 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-config-data\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.134408 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.134425 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-log-httpd\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.134443 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh4cg\" (UniqueName: \"kubernetes.io/projected/2425ff56-09dc-4042-ad77-e8dceaae3bfc-kube-api-access-mh4cg\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.134522 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-run-httpd\") pod \"ceilometer-0\" 
(UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.134538 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-scripts\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.236803 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-run-httpd\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.236898 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-scripts\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.236967 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.237093 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-config-data\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.237145 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.237216 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-log-httpd\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.237242 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh4cg\" (UniqueName: \"kubernetes.io/projected/2425ff56-09dc-4042-ad77-e8dceaae3bfc-kube-api-access-mh4cg\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.237891 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-log-httpd\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.238457 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-run-httpd\") pod \"ceilometer-0\" (UID: 
\"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.243281 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-config-data\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.243510 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.243651 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-scripts\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.245259 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.267185 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh4cg\" (UniqueName: \"kubernetes.io/projected/2425ff56-09dc-4042-ad77-e8dceaae3bfc-kube-api-access-mh4cg\") pod \"ceilometer-0\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " pod="openstack/ceilometer-0" Nov 26 08:40:32 crc kubenswrapper[4909]: I1126 08:40:32.409197 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 08:40:33 crc kubenswrapper[4909]: I1126 08:40:33.012365 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:40:33 crc kubenswrapper[4909]: W1126 08:40:33.013869 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2425ff56_09dc_4042_ad77_e8dceaae3bfc.slice/crio-9ecf434f8e698196188a2806300d0d10077e9a51756970a2559cf0e00b2950e0 WatchSource:0}: Error finding container 9ecf434f8e698196188a2806300d0d10077e9a51756970a2559cf0e00b2950e0: Status 404 returned error can't find the container with id 9ecf434f8e698196188a2806300d0d10077e9a51756970a2559cf0e00b2950e0 Nov 26 08:40:33 crc kubenswrapper[4909]: I1126 08:40:33.441365 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2425ff56-09dc-4042-ad77-e8dceaae3bfc","Type":"ContainerStarted","Data":"9ecf434f8e698196188a2806300d0d10077e9a51756970a2559cf0e00b2950e0"} Nov 26 08:40:34 crc kubenswrapper[4909]: I1126 08:40:34.457130 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2425ff56-09dc-4042-ad77-e8dceaae3bfc","Type":"ContainerStarted","Data":"6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2"} Nov 26 08:40:35 crc kubenswrapper[4909]: I1126 08:40:35.467562 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2425ff56-09dc-4042-ad77-e8dceaae3bfc","Type":"ContainerStarted","Data":"60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317"} Nov 26 08:40:36 crc kubenswrapper[4909]: I1126 08:40:36.481312 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2425ff56-09dc-4042-ad77-e8dceaae3bfc","Type":"ContainerStarted","Data":"32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399"} Nov 26 08:40:37 crc kubenswrapper[4909]: I1126 08:40:37.507060 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2425ff56-09dc-4042-ad77-e8dceaae3bfc","Type":"ContainerStarted","Data":"a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af"} Nov 26 08:40:37 crc kubenswrapper[4909]: I1126 08:40:37.510643 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 08:40:37 crc kubenswrapper[4909]: I1126 08:40:37.543052 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.043061224 podStartE2EDuration="5.543037551s" podCreationTimestamp="2025-11-26 08:40:32 +0000 UTC" firstStartedPulling="2025-11-26 08:40:33.015654261 +0000 UTC m=+6005.161865447" lastFinishedPulling="2025-11-26 08:40:36.515630578 +0000 UTC m=+6008.661841774" observedRunningTime="2025-11-26 08:40:37.539113663 +0000 UTC m=+6009.685324839" watchObservedRunningTime="2025-11-26 08:40:37.543037551 +0000 UTC m=+6009.689248717" Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.220823 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.223485 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.409968 4909 scope.go:117] "RemoveContainer" containerID="2d3a09695f1e8fae8ea2d1bf9d39deba37613d72862c06b637a53e315dce8a34" 
Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.464534 4909 scope.go:117] "RemoveContainer" containerID="7020f8eec2e4d8de75c04b71efa4b6a2066f971e7ab0f1d508998ceeb2cfbabf" Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.535680 4909 scope.go:117] "RemoveContainer" containerID="1bf873418f811ad54388b74e43a76a9c80177d6c80cbea33ee392a459c962262" Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.538512 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.572113 4909 scope.go:117] "RemoveContainer" containerID="14bbe502d690d1baabfab18762ebc51a4dfcb544907664583813379c6cead027" Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.624339 4909 scope.go:117] "RemoveContainer" containerID="573e12a59840bb398166b8ba0be27f790275cbe6794e3294286240c67ecdbb91" Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.661573 4909 scope.go:117] "RemoveContainer" containerID="77fcf303adfbfbe695085b81938bf3c2fd7bc42e5dfe9a1a328198cc6b1e72bc" Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.738763 4909 scope.go:117] "RemoveContainer" containerID="44b99459fe7c4cb0e7f06b4f118f75d72ae6c7712c7e6d7799b61efbdd561e6e" Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.771951 4909 scope.go:117] "RemoveContainer" containerID="45b22906308fa145f2aa9949cf531b960406466c58ab7b4f9f78334e245cf799" Nov 26 08:40:40 crc kubenswrapper[4909]: I1126 08:40:40.804673 4909 scope.go:117] "RemoveContainer" containerID="ea5bf925b062fd3b060411635121d5a4cd113564e0e57d9fcb4ac39c569d9d41" Nov 26 08:40:42 crc kubenswrapper[4909]: I1126 08:40:42.040990 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xh4vs"] Nov 26 08:40:42 crc kubenswrapper[4909]: I1126 08:40:42.057361 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xh4vs"] Nov 26 08:40:42 crc kubenswrapper[4909]: I1126 08:40:42.518324 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d633f2-8335-4ee5-be77-603a03d89a91" path="/var/lib/kubelet/pods/27d633f2-8335-4ee5-be77-603a03d89a91/volumes" Nov 26 08:40:44 crc kubenswrapper[4909]: I1126 08:40:44.481386 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-5l2lv"] Nov 26 08:40:44 crc kubenswrapper[4909]: I1126 08:40:44.483003 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-5l2lv" Nov 26 08:40:44 crc kubenswrapper[4909]: I1126 08:40:44.491871 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-5l2lv"] Nov 26 08:40:44 crc kubenswrapper[4909]: I1126 08:40:44.619776 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b96c\" (UniqueName: \"kubernetes.io/projected/edf71fe7-e97d-4b60-9af7-a00c41f7d141-kube-api-access-7b96c\") pod \"aodh-db-create-5l2lv\" (UID: \"edf71fe7-e97d-4b60-9af7-a00c41f7d141\") " pod="openstack/aodh-db-create-5l2lv" Nov 26 08:40:44 crc kubenswrapper[4909]: I1126 08:40:44.721983 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b96c\" (UniqueName: \"kubernetes.io/projected/edf71fe7-e97d-4b60-9af7-a00c41f7d141-kube-api-access-7b96c\") pod \"aodh-db-create-5l2lv\" (UID: \"edf71fe7-e97d-4b60-9af7-a00c41f7d141\") " pod="openstack/aodh-db-create-5l2lv" Nov 26 08:40:44 crc kubenswrapper[4909]: I1126 08:40:44.753902 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b96c\" (UniqueName: \"kubernetes.io/projected/edf71fe7-e97d-4b60-9af7-a00c41f7d141-kube-api-access-7b96c\") pod \"aodh-db-create-5l2lv\" (UID: \"edf71fe7-e97d-4b60-9af7-a00c41f7d141\") " pod="openstack/aodh-db-create-5l2lv" Nov 26 08:40:44 crc kubenswrapper[4909]: I1126 08:40:44.798855 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-5l2lv" Nov 26 08:40:45 crc kubenswrapper[4909]: W1126 08:40:45.258314 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedf71fe7_e97d_4b60_9af7_a00c41f7d141.slice/crio-c5821555a3cb3fd6d3cea42b6546844859aa2d08f0452620059e008d3eb68c42 WatchSource:0}: Error finding container c5821555a3cb3fd6d3cea42b6546844859aa2d08f0452620059e008d3eb68c42: Status 404 returned error can't find the container with id c5821555a3cb3fd6d3cea42b6546844859aa2d08f0452620059e008d3eb68c42 Nov 26 08:40:45 crc kubenswrapper[4909]: I1126 08:40:45.268619 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-5l2lv"] Nov 26 08:40:45 crc kubenswrapper[4909]: I1126 08:40:45.582556 4909 generic.go:334] "Generic (PLEG): container finished" podID="edf71fe7-e97d-4b60-9af7-a00c41f7d141" containerID="8348115e18368cbe9118777987dabd01579cc87b4bbf282501e8defe512d27cd" exitCode=0 Nov 26 08:40:45 crc kubenswrapper[4909]: I1126 08:40:45.582643 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-5l2lv" event={"ID":"edf71fe7-e97d-4b60-9af7-a00c41f7d141","Type":"ContainerDied","Data":"8348115e18368cbe9118777987dabd01579cc87b4bbf282501e8defe512d27cd"} Nov 26 08:40:45 crc kubenswrapper[4909]: I1126 08:40:45.582928 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-5l2lv" event={"ID":"edf71fe7-e97d-4b60-9af7-a00c41f7d141","Type":"ContainerStarted","Data":"c5821555a3cb3fd6d3cea42b6546844859aa2d08f0452620059e008d3eb68c42"} Nov 26 08:40:47 crc kubenswrapper[4909]: I1126 08:40:47.039154 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-5l2lv" Nov 26 08:40:47 crc kubenswrapper[4909]: I1126 08:40:47.113050 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b96c\" (UniqueName: \"kubernetes.io/projected/edf71fe7-e97d-4b60-9af7-a00c41f7d141-kube-api-access-7b96c\") pod \"edf71fe7-e97d-4b60-9af7-a00c41f7d141\" (UID: \"edf71fe7-e97d-4b60-9af7-a00c41f7d141\") " Nov 26 08:40:47 crc kubenswrapper[4909]: I1126 08:40:47.119932 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf71fe7-e97d-4b60-9af7-a00c41f7d141-kube-api-access-7b96c" (OuterVolumeSpecName: "kube-api-access-7b96c") pod "edf71fe7-e97d-4b60-9af7-a00c41f7d141" (UID: "edf71fe7-e97d-4b60-9af7-a00c41f7d141"). InnerVolumeSpecName "kube-api-access-7b96c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:40:47 crc kubenswrapper[4909]: I1126 08:40:47.214721 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b96c\" (UniqueName: \"kubernetes.io/projected/edf71fe7-e97d-4b60-9af7-a00c41f7d141-kube-api-access-7b96c\") on node \"crc\" DevicePath \"\"" Nov 26 08:40:47 crc kubenswrapper[4909]: I1126 08:40:47.610117 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-5l2lv" event={"ID":"edf71fe7-e97d-4b60-9af7-a00c41f7d141","Type":"ContainerDied","Data":"c5821555a3cb3fd6d3cea42b6546844859aa2d08f0452620059e008d3eb68c42"} Nov 26 08:40:47 crc kubenswrapper[4909]: I1126 08:40:47.610163 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5821555a3cb3fd6d3cea42b6546844859aa2d08f0452620059e008d3eb68c42" Nov 26 08:40:47 crc kubenswrapper[4909]: I1126 08:40:47.610229 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-5l2lv" Nov 26 08:40:54 crc kubenswrapper[4909]: I1126 08:40:54.597763 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-fc5d-account-create-t8f2m"] Nov 26 08:40:54 crc kubenswrapper[4909]: E1126 08:40:54.598920 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf71fe7-e97d-4b60-9af7-a00c41f7d141" containerName="mariadb-database-create" Nov 26 08:40:54 crc kubenswrapper[4909]: I1126 08:40:54.598941 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf71fe7-e97d-4b60-9af7-a00c41f7d141" containerName="mariadb-database-create" Nov 26 08:40:54 crc kubenswrapper[4909]: I1126 08:40:54.599261 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf71fe7-e97d-4b60-9af7-a00c41f7d141" containerName="mariadb-database-create" Nov 26 08:40:54 crc kubenswrapper[4909]: I1126 08:40:54.600187 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-fc5d-account-create-t8f2m" Nov 26 08:40:54 crc kubenswrapper[4909]: I1126 08:40:54.602154 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 26 08:40:54 crc kubenswrapper[4909]: I1126 08:40:54.612482 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-fc5d-account-create-t8f2m"] Nov 26 08:40:54 crc kubenswrapper[4909]: I1126 08:40:54.694517 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5qmg\" (UniqueName: \"kubernetes.io/projected/994257ba-d9f2-49c5-bc46-ef44428bdbe9-kube-api-access-l5qmg\") pod \"aodh-fc5d-account-create-t8f2m\" (UID: \"994257ba-d9f2-49c5-bc46-ef44428bdbe9\") " pod="openstack/aodh-fc5d-account-create-t8f2m" Nov 26 08:40:54 crc kubenswrapper[4909]: I1126 08:40:54.796963 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5qmg\" (UniqueName: \"kubernetes.io/projected/994257ba-d9f2-49c5-bc46-ef44428bdbe9-kube-api-access-l5qmg\") pod \"aodh-fc5d-account-create-t8f2m\" (UID: \"994257ba-d9f2-49c5-bc46-ef44428bdbe9\") " pod="openstack/aodh-fc5d-account-create-t8f2m" Nov 26 08:40:54 crc kubenswrapper[4909]: I1126 08:40:54.816472 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5qmg\" (UniqueName: \"kubernetes.io/projected/994257ba-d9f2-49c5-bc46-ef44428bdbe9-kube-api-access-l5qmg\") pod \"aodh-fc5d-account-create-t8f2m\" (UID: \"994257ba-d9f2-49c5-bc46-ef44428bdbe9\") " pod="openstack/aodh-fc5d-account-create-t8f2m" Nov 26 08:40:54 crc kubenswrapper[4909]: I1126 08:40:54.918420 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fc5d-account-create-t8f2m" Nov 26 08:40:55 crc kubenswrapper[4909]: I1126 08:40:55.392733 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-fc5d-account-create-t8f2m"] Nov 26 08:40:55 crc kubenswrapper[4909]: I1126 08:40:55.696210 4909 generic.go:334] "Generic (PLEG): container finished" podID="994257ba-d9f2-49c5-bc46-ef44428bdbe9" containerID="86e3a0fc39e157287bf91235111cd2ea6eec93a963dadada22da5aa12f9182b7" exitCode=0 Nov 26 08:40:55 crc kubenswrapper[4909]: I1126 08:40:55.696364 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fc5d-account-create-t8f2m" event={"ID":"994257ba-d9f2-49c5-bc46-ef44428bdbe9","Type":"ContainerDied","Data":"86e3a0fc39e157287bf91235111cd2ea6eec93a963dadada22da5aa12f9182b7"} Nov 26 08:40:55 crc kubenswrapper[4909]: I1126 08:40:55.696571 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fc5d-account-create-t8f2m" event={"ID":"994257ba-d9f2-49c5-bc46-ef44428bdbe9","Type":"ContainerStarted","Data":"9de16b5e0466384b749d4353be669cffc4f931a7b920c14b9baa224d9928664f"} Nov 26 08:40:57 crc kubenswrapper[4909]: I1126 08:40:57.196783 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-fc5d-account-create-t8f2m" Nov 26 08:40:57 crc kubenswrapper[4909]: I1126 08:40:57.353142 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5qmg\" (UniqueName: \"kubernetes.io/projected/994257ba-d9f2-49c5-bc46-ef44428bdbe9-kube-api-access-l5qmg\") pod \"994257ba-d9f2-49c5-bc46-ef44428bdbe9\" (UID: \"994257ba-d9f2-49c5-bc46-ef44428bdbe9\") " Nov 26 08:40:57 crc kubenswrapper[4909]: I1126 08:40:57.359916 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994257ba-d9f2-49c5-bc46-ef44428bdbe9-kube-api-access-l5qmg" (OuterVolumeSpecName: "kube-api-access-l5qmg") pod "994257ba-d9f2-49c5-bc46-ef44428bdbe9" (UID: "994257ba-d9f2-49c5-bc46-ef44428bdbe9"). InnerVolumeSpecName "kube-api-access-l5qmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:40:57 crc kubenswrapper[4909]: I1126 08:40:57.456987 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5qmg\" (UniqueName: \"kubernetes.io/projected/994257ba-d9f2-49c5-bc46-ef44428bdbe9-kube-api-access-l5qmg\") on node \"crc\" DevicePath \"\"" Nov 26 08:40:57 crc kubenswrapper[4909]: I1126 08:40:57.720853 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fc5d-account-create-t8f2m" event={"ID":"994257ba-d9f2-49c5-bc46-ef44428bdbe9","Type":"ContainerDied","Data":"9de16b5e0466384b749d4353be669cffc4f931a7b920c14b9baa224d9928664f"} Nov 26 08:40:57 crc kubenswrapper[4909]: I1126 08:40:57.720895 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9de16b5e0466384b749d4353be669cffc4f931a7b920c14b9baa224d9928664f" Nov 26 08:40:57 crc kubenswrapper[4909]: I1126 08:40:57.720934 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fc5d-account-create-t8f2m" Nov 26 08:40:59 crc kubenswrapper[4909]: I1126 08:40:59.971793 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-jhvlc"] Nov 26 08:40:59 crc kubenswrapper[4909]: E1126 08:40:59.973124 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994257ba-d9f2-49c5-bc46-ef44428bdbe9" containerName="mariadb-account-create" Nov 26 08:40:59 crc kubenswrapper[4909]: I1126 08:40:59.973141 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="994257ba-d9f2-49c5-bc46-ef44428bdbe9" containerName="mariadb-account-create" Nov 26 08:40:59 crc kubenswrapper[4909]: I1126 08:40:59.973379 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="994257ba-d9f2-49c5-bc46-ef44428bdbe9" containerName="mariadb-account-create" Nov 26 08:40:59 crc kubenswrapper[4909]: I1126 08:40:59.974302 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:40:59 crc kubenswrapper[4909]: I1126 08:40:59.977435 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 26 08:40:59 crc kubenswrapper[4909]: I1126 08:40:59.977964 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 26 08:40:59 crc kubenswrapper[4909]: I1126 08:40:59.978554 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-k245f" Nov 26 08:40:59 crc kubenswrapper[4909]: I1126 08:40:59.995950 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jhvlc"] Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.035979 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-scripts\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.036223 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-combined-ca-bundle\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.036363 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-config-data\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.036422 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwdd7\" (UniqueName: \"kubernetes.io/projected/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-kube-api-access-pwdd7\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.138106 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-combined-ca-bundle\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.138223 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-config-data\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.138304 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwdd7\" (UniqueName: \"kubernetes.io/projected/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-kube-api-access-pwdd7\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.138403 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-scripts\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.147585 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-scripts\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.148139 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-config-data\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.157516 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-combined-ca-bundle\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.158228 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwdd7\" (UniqueName: \"kubernetes.io/projected/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-kube-api-access-pwdd7\") pod \"aodh-db-sync-jhvlc\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.310489 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:00 crc kubenswrapper[4909]: I1126 08:41:00.869546 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jhvlc"] Nov 26 08:41:01 crc kubenswrapper[4909]: I1126 08:41:01.773462 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jhvlc" event={"ID":"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7","Type":"ContainerStarted","Data":"c49dfb5c00d63bb725328812c5354c32652e35f520582bbebb4ca88f08a4d3cc"} Nov 26 08:41:02 crc kubenswrapper[4909]: I1126 08:41:02.423121 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 26 08:41:05 crc kubenswrapper[4909]: I1126 08:41:05.816154 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jhvlc" event={"ID":"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7","Type":"ContainerStarted","Data":"e28686a850ebcf4a11463e12bfc074f25f1cc1a3a80543d004920b96839f6897"} Nov 26 08:41:05 crc kubenswrapper[4909]: I1126 08:41:05.839362 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-jhvlc" podStartSLOduration=2.343690649 podStartE2EDuration="6.839338722s" podCreationTimestamp="2025-11-26 08:40:59 +0000 UTC" firstStartedPulling="2025-11-26 08:41:00.872182305 +0000 UTC m=+6033.018393471" lastFinishedPulling="2025-11-26 08:41:05.367830378 +0000 UTC m=+6037.514041544" observedRunningTime="2025-11-26 08:41:05.833845292 +0000 UTC m=+6037.980056458" watchObservedRunningTime="2025-11-26 08:41:05.839338722 +0000 UTC m=+6037.985549898" Nov 26 08:41:07 crc kubenswrapper[4909]: I1126 08:41:07.836769 4909 generic.go:334] "Generic (PLEG): container finished" podID="a1fa13bf-535b-4d94-9f7a-87f8ab536ad7" 
containerID="e28686a850ebcf4a11463e12bfc074f25f1cc1a3a80543d004920b96839f6897" exitCode=0 Nov 26 08:41:07 crc kubenswrapper[4909]: I1126 08:41:07.836858 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jhvlc" event={"ID":"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7","Type":"ContainerDied","Data":"e28686a850ebcf4a11463e12bfc074f25f1cc1a3a80543d004920b96839f6897"} Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.225439 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.327448 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-config-data\") pod \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.327579 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwdd7\" (UniqueName: \"kubernetes.io/projected/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-kube-api-access-pwdd7\") pod \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.327902 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-scripts\") pod \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.328154 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-combined-ca-bundle\") pod \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\" (UID: \"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7\") " Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.335709 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-kube-api-access-pwdd7" (OuterVolumeSpecName: "kube-api-access-pwdd7") pod "a1fa13bf-535b-4d94-9f7a-87f8ab536ad7" (UID: "a1fa13bf-535b-4d94-9f7a-87f8ab536ad7"). InnerVolumeSpecName "kube-api-access-pwdd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.336862 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-scripts" (OuterVolumeSpecName: "scripts") pod "a1fa13bf-535b-4d94-9f7a-87f8ab536ad7" (UID: "a1fa13bf-535b-4d94-9f7a-87f8ab536ad7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.359071 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-config-data" (OuterVolumeSpecName: "config-data") pod "a1fa13bf-535b-4d94-9f7a-87f8ab536ad7" (UID: "a1fa13bf-535b-4d94-9f7a-87f8ab536ad7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.374112 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1fa13bf-535b-4d94-9f7a-87f8ab536ad7" (UID: "a1fa13bf-535b-4d94-9f7a-87f8ab536ad7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.434180 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.434684 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwdd7\" (UniqueName: \"kubernetes.io/projected/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-kube-api-access-pwdd7\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.434887 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.435046 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.865030 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jhvlc" event={"ID":"a1fa13bf-535b-4d94-9f7a-87f8ab536ad7","Type":"ContainerDied","Data":"c49dfb5c00d63bb725328812c5354c32652e35f520582bbebb4ca88f08a4d3cc"} Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.865085 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c49dfb5c00d63bb725328812c5354c32652e35f520582bbebb4ca88f08a4d3cc" Nov 26 08:41:09 crc kubenswrapper[4909]: I1126 08:41:09.865111 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jhvlc" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.051929 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 26 08:41:10 crc kubenswrapper[4909]: E1126 08:41:10.052464 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fa13bf-535b-4d94-9f7a-87f8ab536ad7" containerName="aodh-db-sync" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.052483 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fa13bf-535b-4d94-9f7a-87f8ab536ad7" containerName="aodh-db-sync" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.052739 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fa13bf-535b-4d94-9f7a-87f8ab536ad7" containerName="aodh-db-sync" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.055103 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.068173 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.068234 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-k245f" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.068323 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.089919 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.151083 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820cfd9c-2ab4-4660-90d8-5664ba4ae34e-config-data\") pod \"aodh-0\" (UID: \"820cfd9c-2ab4-4660-90d8-5664ba4ae34e\") " pod="openstack/aodh-0" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.151766 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820cfd9c-2ab4-4660-90d8-5664ba4ae34e-scripts\") pod \"aodh-0\" (UID: \"820cfd9c-2ab4-4660-90d8-5664ba4ae34e\") " pod="openstack/aodh-0" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.151938 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqgz4\" (UniqueName: \"kubernetes.io/projected/820cfd9c-2ab4-4660-90d8-5664ba4ae34e-kube-api-access-vqgz4\") pod \"aodh-0\" (UID: \"820cfd9c-2ab4-4660-90d8-5664ba4ae34e\") " pod="openstack/aodh-0" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.151997 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820cfd9c-2ab4-4660-90d8-5664ba4ae34e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"820cfd9c-2ab4-4660-90d8-5664ba4ae34e\") " pod="openstack/aodh-0" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.254257 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqgz4\" (UniqueName: \"kubernetes.io/projected/820cfd9c-2ab4-4660-90d8-5664ba4ae34e-kube-api-access-vqgz4\") pod \"aodh-0\" (UID: \"820cfd9c-2ab4-4660-90d8-5664ba4ae34e\") " pod="openstack/aodh-0" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.254353 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820cfd9c-2ab4-4660-90d8-5664ba4ae34e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"820cfd9c-2ab4-4660-90d8-5664ba4ae34e\") " pod="openstack/aodh-0" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.254612 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820cfd9c-2ab4-4660-90d8-5664ba4ae34e-config-data\") pod \"aodh-0\" (UID: \"820cfd9c-2ab4-4660-90d8-5664ba4ae34e\") " pod="openstack/aodh-0" Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.254703 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820cfd9c-2ab4-4660-90d8-5664ba4ae34e-scripts\") pod \"aodh-0\" (UID: \"820cfd9c-2ab4-4660-90d8-5664ba4ae34e\") " pod="openstack/aodh-0" Nov 26 08:41:10 crc kubenswrapper[4909]: 
Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.259801 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820cfd9c-2ab4-4660-90d8-5664ba4ae34e-config-data\") pod \"aodh-0\" (UID: \"820cfd9c-2ab4-4660-90d8-5664ba4ae34e\") " pod="openstack/aodh-0"
Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.259990 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820cfd9c-2ab4-4660-90d8-5664ba4ae34e-scripts\") pod \"aodh-0\" (UID: \"820cfd9c-2ab4-4660-90d8-5664ba4ae34e\") " pod="openstack/aodh-0"
Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.270251 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqgz4\" (UniqueName: \"kubernetes.io/projected/820cfd9c-2ab4-4660-90d8-5664ba4ae34e-kube-api-access-vqgz4\") pod \"aodh-0\" (UID: \"820cfd9c-2ab4-4660-90d8-5664ba4ae34e\") " pod="openstack/aodh-0"
Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.380188 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 26 08:41:10 crc kubenswrapper[4909]: I1126 08:41:10.872895 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 26 08:41:11 crc kubenswrapper[4909]: I1126 08:41:11.886932 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"820cfd9c-2ab4-4660-90d8-5664ba4ae34e","Type":"ContainerStarted","Data":"a4ae5f72bf7cfc979d71cc9cba045426ec55499ca2d4b9375678d88edcd6e20b"}
Nov 26 08:41:11 crc kubenswrapper[4909]: I1126 08:41:11.887338 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"820cfd9c-2ab4-4660-90d8-5664ba4ae34e","Type":"ContainerStarted","Data":"2f7ea00a385989c63743dd3738377754df1a382a47163b0fcbc15747105c105b"}
Nov 26 08:41:12 crc kubenswrapper[4909]: I1126 08:41:12.057483 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 08:41:12 crc kubenswrapper[4909]: I1126 08:41:12.057891 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="ceilometer-central-agent" containerID="cri-o://6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2" gracePeriod=30
Nov 26 08:41:12 crc kubenswrapper[4909]: I1126 08:41:12.058252 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="proxy-httpd" containerID="cri-o://a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af" gracePeriod=30
Nov 26 08:41:12 crc kubenswrapper[4909]: I1126 08:41:12.058443 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="ceilometer-notification-agent" containerID="cri-o://60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317" gracePeriod=30
Nov 26 08:41:12 crc kubenswrapper[4909]: I1126 08:41:12.058531 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="sg-core" containerID="cri-o://32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399" gracePeriod=30
Nov 26 08:41:12 crc kubenswrapper[4909]: I1126 08:41:12.897973 4909 generic.go:334] "Generic (PLEG): container finished" podID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerID="a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af" exitCode=0
Nov 26 08:41:12 crc kubenswrapper[4909]: I1126 08:41:12.898287 4909 generic.go:334] "Generic (PLEG): container finished" podID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerID="32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399" exitCode=2
Nov 26 08:41:12 crc kubenswrapper[4909]: I1126 08:41:12.898298 4909 generic.go:334] "Generic (PLEG): container finished" podID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerID="6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2" exitCode=0
Nov 26 08:41:12 crc kubenswrapper[4909]: I1126 08:41:12.898032 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2425ff56-09dc-4042-ad77-e8dceaae3bfc","Type":"ContainerDied","Data":"a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af"}
Nov 26 08:41:12 crc kubenswrapper[4909]: I1126 08:41:12.898332 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2425ff56-09dc-4042-ad77-e8dceaae3bfc","Type":"ContainerDied","Data":"32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399"}
Nov 26 08:41:12 crc kubenswrapper[4909]: I1126 08:41:12.898345 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2425ff56-09dc-4042-ad77-e8dceaae3bfc","Type":"ContainerDied","Data":"6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2"}
Nov 26 08:41:13 crc kubenswrapper[4909]: I1126 08:41:13.910976 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"820cfd9c-2ab4-4660-90d8-5664ba4ae34e","Type":"ContainerStarted","Data":"7b48b6639e721554a5990f627d7a9166083fc0cd33df4b1885ceacd463b9abc3"}
Nov 26 08:41:14 crc kubenswrapper[4909]: I1126 08:41:14.923551 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"820cfd9c-2ab4-4660-90d8-5664ba4ae34e","Type":"ContainerStarted","Data":"a06c53f0b785f6598178d6897bfa304089dcedd3b14d496c4a264b3ae5ebdca2"}
Nov 26 08:41:16 crc kubenswrapper[4909]: I1126 08:41:16.945522 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"820cfd9c-2ab4-4660-90d8-5664ba4ae34e","Type":"ContainerStarted","Data":"c56e03de8c01135ac1a0ffde90ff889c5f7239652edb7c1896d0c7fbe3edbb82"}
Nov 26 08:41:16 crc kubenswrapper[4909]: I1126 08:41:16.980590 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.604684582 podStartE2EDuration="6.980570949s" podCreationTimestamp="2025-11-26 08:41:10 +0000 UTC" firstStartedPulling="2025-11-26 08:41:10.880968512 +0000 UTC m=+6043.027179678" lastFinishedPulling="2025-11-26 08:41:16.256854869 +0000 UTC m=+6048.403066045" observedRunningTime="2025-11-26 08:41:16.969369163 +0000 UTC m=+6049.115580329" watchObservedRunningTime="2025-11-26 08:41:16.980570949 +0000 UTC m=+6049.126782115"
Nov 26 08:41:17 crc kubenswrapper[4909]: I1126 08:41:17.863810 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 08:41:17 crc kubenswrapper[4909]: I1126 08:41:17.961641 4909 generic.go:334] "Generic (PLEG): container finished" podID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerID="60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317" exitCode=0 Nov 26 08:41:17 crc kubenswrapper[4909]: I1126 08:41:17.961705 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 08:41:17 crc kubenswrapper[4909]: I1126 08:41:17.961756 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2425ff56-09dc-4042-ad77-e8dceaae3bfc","Type":"ContainerDied","Data":"60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317"} Nov 26 08:41:17 crc kubenswrapper[4909]: I1126 08:41:17.961852 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2425ff56-09dc-4042-ad77-e8dceaae3bfc","Type":"ContainerDied","Data":"9ecf434f8e698196188a2806300d0d10077e9a51756970a2559cf0e00b2950e0"} Nov 26 08:41:17 crc kubenswrapper[4909]: I1126 08:41:17.961876 4909 scope.go:117] "RemoveContainer" containerID="a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af" Nov 26 08:41:17 crc kubenswrapper[4909]: I1126 08:41:17.990909 4909 scope.go:117] "RemoveContainer" containerID="32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.006517 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-run-httpd\") pod \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.006611 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-config-data\") pod \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.006646 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-sg-core-conf-yaml\") pod \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.006677 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-scripts\") pod \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.006740 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-combined-ca-bundle\") pod \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.006812 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-log-httpd\") pod \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " Nov 26 08:41:18 crc 
kubenswrapper[4909]: I1126 08:41:18.006833 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh4cg\" (UniqueName: \"kubernetes.io/projected/2425ff56-09dc-4042-ad77-e8dceaae3bfc-kube-api-access-mh4cg\") pod \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\" (UID: \"2425ff56-09dc-4042-ad77-e8dceaae3bfc\") " Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.006946 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2425ff56-09dc-4042-ad77-e8dceaae3bfc" (UID: "2425ff56-09dc-4042-ad77-e8dceaae3bfc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.007254 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.008242 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2425ff56-09dc-4042-ad77-e8dceaae3bfc" (UID: "2425ff56-09dc-4042-ad77-e8dceaae3bfc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.013144 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-scripts" (OuterVolumeSpecName: "scripts") pod "2425ff56-09dc-4042-ad77-e8dceaae3bfc" (UID: "2425ff56-09dc-4042-ad77-e8dceaae3bfc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.020792 4909 scope.go:117] "RemoveContainer" containerID="60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.020816 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2425ff56-09dc-4042-ad77-e8dceaae3bfc-kube-api-access-mh4cg" (OuterVolumeSpecName: "kube-api-access-mh4cg") pod "2425ff56-09dc-4042-ad77-e8dceaae3bfc" (UID: "2425ff56-09dc-4042-ad77-e8dceaae3bfc"). InnerVolumeSpecName "kube-api-access-mh4cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.044824 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2425ff56-09dc-4042-ad77-e8dceaae3bfc" (UID: "2425ff56-09dc-4042-ad77-e8dceaae3bfc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.109615 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2425ff56-09dc-4042-ad77-e8dceaae3bfc-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.109656 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh4cg\" (UniqueName: \"kubernetes.io/projected/2425ff56-09dc-4042-ad77-e8dceaae3bfc-kube-api-access-mh4cg\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.109669 4909 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.109680 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.113838 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2425ff56-09dc-4042-ad77-e8dceaae3bfc" (UID: "2425ff56-09dc-4042-ad77-e8dceaae3bfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.129201 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-config-data" (OuterVolumeSpecName: "config-data") pod "2425ff56-09dc-4042-ad77-e8dceaae3bfc" (UID: "2425ff56-09dc-4042-ad77-e8dceaae3bfc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.192634 4909 scope.go:117] "RemoveContainer" containerID="6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.212064 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.212110 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2425ff56-09dc-4042-ad77-e8dceaae3bfc-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.229012 4909 scope.go:117] "RemoveContainer" containerID="a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af" Nov 26 08:41:18 crc kubenswrapper[4909]: E1126 08:41:18.232214 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af\": container with ID starting with a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af not found: ID does not exist" containerID="a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.232251 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af"} err="failed to get container status \"a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af\": rpc error: code = NotFound desc = could not find container \"a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af\": container with ID starting with a7b73a47e3c1d575103f14a1c4b749eca40ab79ff0524380d5e56ec18987a2af not found: ID does not exist" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.232273 4909 scope.go:117] "RemoveContainer" containerID="32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399" Nov 26 08:41:18 crc kubenswrapper[4909]: E1126 08:41:18.236105 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399\": container with ID starting with 32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399 not found: ID does not exist" containerID="32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.236136 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399"} err="failed to get container status \"32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399\": rpc error: code = NotFound desc = could not find container \"32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399\": container with ID starting with 32dd2b55a8fe5c1d8920eb2df5e1665fc6db625bda55662fbf0c871de3494399 not found: ID does not exist" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.236152 4909 scope.go:117] "RemoveContainer" containerID="60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317" Nov 26 08:41:18 crc kubenswrapper[4909]: E1126 08:41:18.236682 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317\": container with ID starting with 60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317 not found: ID does not exist" containerID="60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.236705 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317"} err="failed to get container status \"60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317\": rpc error: code = NotFound desc = could not find container \"60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317\": container with ID starting with 60469f72ea25947992fed22d66008dd2ee2bbab316e9360ed05fcc5f3e256317 not found: ID does not exist" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.236716 4909 scope.go:117] "RemoveContainer" containerID="6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2" Nov 26 08:41:18 crc kubenswrapper[4909]: E1126 08:41:18.237126 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2\": container with ID starting with 6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2 not found: ID does not exist" containerID="6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.237147 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2"} err="failed to get container status \"6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2\": rpc error: code = NotFound desc = could not find container \"6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2\": container with ID starting with 6871f0194c1809b5b08a5d2ca7884f4298e41daf77b5557e99ed2845b3febbc2 not found: ID does not exist" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.313216 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.333585 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.348776 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:41:18 crc kubenswrapper[4909]: E1126 08:41:18.349322 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="ceilometer-central-agent" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.349345 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="ceilometer-central-agent" Nov 26 08:41:18 crc kubenswrapper[4909]: E1126 08:41:18.349382 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="proxy-httpd" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.349406 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="proxy-httpd" Nov 26 08:41:18 crc kubenswrapper[4909]: E1126 08:41:18.349426 4909 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="sg-core" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.349436 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="sg-core" Nov 26 08:41:18 crc kubenswrapper[4909]: E1126 08:41:18.349464 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="ceilometer-notification-agent" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.349472 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="ceilometer-notification-agent" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.349724 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="ceilometer-notification-agent" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.349752 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="ceilometer-central-agent" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.349783 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="proxy-httpd" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.349803 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" containerName="sg-core" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.352206 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.355428 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.356034 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.363887 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.420484 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-config-data\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.420629 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-run-httpd\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.420833 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-scripts\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.420929 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgt4\" (UniqueName: \"kubernetes.io/projected/27e4c6ae-206c-49af-9994-4ae242693621-kube-api-access-rdgt4\") pod \"ceilometer-0\" 
(UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.421020 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.421092 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-log-httpd\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.421188 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.522787 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-scripts\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.523239 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdgt4\" (UniqueName: \"kubernetes.io/projected/27e4c6ae-206c-49af-9994-4ae242693621-kube-api-access-rdgt4\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.523283 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.523439 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-log-httpd\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.523585 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.523666 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-config-data\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.523817 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-run-httpd\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.523930 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2425ff56-09dc-4042-ad77-e8dceaae3bfc" path="/var/lib/kubelet/pods/2425ff56-09dc-4042-ad77-e8dceaae3bfc/volumes" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.524672 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-run-httpd\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.524845 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-log-httpd\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.528357 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.529202 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.529623 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-scripts\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.539430 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-config-data\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.542260 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdgt4\" (UniqueName: \"kubernetes.io/projected/27e4c6ae-206c-49af-9994-4ae242693621-kube-api-access-rdgt4\") pod \"ceilometer-0\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " pod="openstack/ceilometer-0" Nov 26 08:41:18 crc kubenswrapper[4909]: I1126 08:41:18.699309 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 08:41:19 crc kubenswrapper[4909]: W1126 08:41:19.214332 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27e4c6ae_206c_49af_9994_4ae242693621.slice/crio-f6407e62495f22c6741016d7d2990033e754f47eb80906be7c8dbe5264cb8c9e WatchSource:0}: Error finding container f6407e62495f22c6741016d7d2990033e754f47eb80906be7c8dbe5264cb8c9e: Status 404 returned error can't find the container with id f6407e62495f22c6741016d7d2990033e754f47eb80906be7c8dbe5264cb8c9e Nov 26 08:41:19 crc kubenswrapper[4909]: I1126 08:41:19.215029 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:41:19 crc kubenswrapper[4909]: I1126 08:41:19.986365 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e4c6ae-206c-49af-9994-4ae242693621","Type":"ContainerStarted","Data":"10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce"} Nov 26 08:41:19 crc kubenswrapper[4909]: I1126 08:41:19.986906 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e4c6ae-206c-49af-9994-4ae242693621","Type":"ContainerStarted","Data":"f6407e62495f22c6741016d7d2990033e754f47eb80906be7c8dbe5264cb8c9e"} Nov 26 08:41:21 crc kubenswrapper[4909]: I1126 08:41:21.014135 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e4c6ae-206c-49af-9994-4ae242693621","Type":"ContainerStarted","Data":"59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484"} Nov 26 08:41:22 crc kubenswrapper[4909]: I1126 08:41:22.025032 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e4c6ae-206c-49af-9994-4ae242693621","Type":"ContainerStarted","Data":"8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4"} Nov 26 08:41:22 crc kubenswrapper[4909]: I1126 08:41:22.764122 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-5d4fw"] Nov 26 08:41:22 crc kubenswrapper[4909]: I1126 08:41:22.766754 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-5d4fw" Nov 26 08:41:22 crc kubenswrapper[4909]: I1126 08:41:22.776717 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-5d4fw"] Nov 26 08:41:22 crc kubenswrapper[4909]: I1126 08:41:22.834078 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n58z2\" (UniqueName: \"kubernetes.io/projected/5a92a58e-edd3-4da6-bbd3-c7dc41189ab5-kube-api-access-n58z2\") pod \"manila-db-create-5d4fw\" (UID: \"5a92a58e-edd3-4da6-bbd3-c7dc41189ab5\") " pod="openstack/manila-db-create-5d4fw" Nov 26 08:41:22 crc kubenswrapper[4909]: I1126 08:41:22.935746 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n58z2\" (UniqueName: \"kubernetes.io/projected/5a92a58e-edd3-4da6-bbd3-c7dc41189ab5-kube-api-access-n58z2\") pod \"manila-db-create-5d4fw\" (UID: \"5a92a58e-edd3-4da6-bbd3-c7dc41189ab5\") " pod="openstack/manila-db-create-5d4fw" Nov 26 08:41:22 crc kubenswrapper[4909]: I1126 08:41:22.955717 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n58z2\" (UniqueName: \"kubernetes.io/projected/5a92a58e-edd3-4da6-bbd3-c7dc41189ab5-kube-api-access-n58z2\") pod \"manila-db-create-5d4fw\" (UID: \"5a92a58e-edd3-4da6-bbd3-c7dc41189ab5\") " pod="openstack/manila-db-create-5d4fw" Nov 26 08:41:23 crc kubenswrapper[4909]: I1126 08:41:23.037765 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e4c6ae-206c-49af-9994-4ae242693621","Type":"ContainerStarted","Data":"46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5"} Nov 26 08:41:23 crc kubenswrapper[4909]: I1126 08:41:23.037936 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 08:41:23 crc kubenswrapper[4909]: I1126 08:41:23.070833 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.112125894 podStartE2EDuration="5.070815011s" podCreationTimestamp="2025-11-26 08:41:18 +0000 UTC" firstStartedPulling="2025-11-26 08:41:19.218801553 +0000 UTC m=+6051.365012719" lastFinishedPulling="2025-11-26 08:41:22.17749067 +0000 UTC m=+6054.323701836" observedRunningTime="2025-11-26 08:41:23.064737846 +0000 UTC m=+6055.210949012" watchObservedRunningTime="2025-11-26 08:41:23.070815011 +0000 UTC m=+6055.217026177" Nov 26 08:41:23 crc kubenswrapper[4909]: I1126 08:41:23.088581 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-5d4fw" Nov 26 08:41:23 crc kubenswrapper[4909]: I1126 08:41:23.654371 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-5d4fw"] Nov 26 08:41:24 crc kubenswrapper[4909]: I1126 08:41:24.044131 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-gsl8q"] Nov 26 08:41:24 crc kubenswrapper[4909]: I1126 08:41:24.051584 4909 generic.go:334] "Generic (PLEG): container finished" podID="5a92a58e-edd3-4da6-bbd3-c7dc41189ab5" containerID="82008ba324306166c1092f5fd6df1e3c53ce1079c21a51b88fc6d5eec46d6a8c" exitCode=0 Nov 26 08:41:24 crc kubenswrapper[4909]: I1126 08:41:24.052273 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-5d4fw" event={"ID":"5a92a58e-edd3-4da6-bbd3-c7dc41189ab5","Type":"ContainerDied","Data":"82008ba324306166c1092f5fd6df1e3c53ce1079c21a51b88fc6d5eec46d6a8c"} Nov 26 08:41:24 crc kubenswrapper[4909]: I1126 08:41:24.052377 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-5d4fw" event={"ID":"5a92a58e-edd3-4da6-bbd3-c7dc41189ab5","Type":"ContainerStarted","Data":"18a2fb20a297fe3e03aa316fa7c957277c492d4fdbe2969fd795b2ae8912eefd"} Nov 26 08:41:24 crc kubenswrapper[4909]: I1126 08:41:24.054776 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-gsl8q"] Nov 26 08:41:24 crc kubenswrapper[4909]: I1126 08:41:24.514468 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ea1c00-0636-4f6b-982d-a5fc0ca216be" path="/var/lib/kubelet/pods/36ea1c00-0636-4f6b-982d-a5fc0ca216be/volumes" Nov 26 08:41:25 crc kubenswrapper[4909]: I1126 08:41:25.584121 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-5d4fw" Nov 26 08:41:25 crc kubenswrapper[4909]: I1126 08:41:25.701491 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n58z2\" (UniqueName: \"kubernetes.io/projected/5a92a58e-edd3-4da6-bbd3-c7dc41189ab5-kube-api-access-n58z2\") pod \"5a92a58e-edd3-4da6-bbd3-c7dc41189ab5\" (UID: \"5a92a58e-edd3-4da6-bbd3-c7dc41189ab5\") " Nov 26 08:41:25 crc kubenswrapper[4909]: I1126 08:41:25.727811 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a92a58e-edd3-4da6-bbd3-c7dc41189ab5-kube-api-access-n58z2" (OuterVolumeSpecName: "kube-api-access-n58z2") pod "5a92a58e-edd3-4da6-bbd3-c7dc41189ab5" (UID: "5a92a58e-edd3-4da6-bbd3-c7dc41189ab5"). InnerVolumeSpecName "kube-api-access-n58z2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:41:25 crc kubenswrapper[4909]: I1126 08:41:25.804305 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n58z2\" (UniqueName: \"kubernetes.io/projected/5a92a58e-edd3-4da6-bbd3-c7dc41189ab5-kube-api-access-n58z2\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:26 crc kubenswrapper[4909]: I1126 08:41:26.086563 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-5d4fw" event={"ID":"5a92a58e-edd3-4da6-bbd3-c7dc41189ab5","Type":"ContainerDied","Data":"18a2fb20a297fe3e03aa316fa7c957277c492d4fdbe2969fd795b2ae8912eefd"} Nov 26 08:41:26 crc kubenswrapper[4909]: I1126 08:41:26.086630 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18a2fb20a297fe3e03aa316fa7c957277c492d4fdbe2969fd795b2ae8912eefd" Nov 26 08:41:26 crc kubenswrapper[4909]: I1126 08:41:26.086692 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-5d4fw" Nov 26 08:41:32 crc kubenswrapper[4909]: I1126 08:41:32.849498 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-51b4-account-create-4dxrc"] Nov 26 08:41:32 crc kubenswrapper[4909]: E1126 08:41:32.850469 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a92a58e-edd3-4da6-bbd3-c7dc41189ab5" containerName="mariadb-database-create" Nov 26 08:41:32 crc kubenswrapper[4909]: I1126 08:41:32.850481 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a92a58e-edd3-4da6-bbd3-c7dc41189ab5" containerName="mariadb-database-create" Nov 26 08:41:32 crc kubenswrapper[4909]: I1126 08:41:32.850736 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a92a58e-edd3-4da6-bbd3-c7dc41189ab5" containerName="mariadb-database-create" Nov 26 08:41:32 crc kubenswrapper[4909]: I1126 08:41:32.851426 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-51b4-account-create-4dxrc" Nov 26 08:41:32 crc kubenswrapper[4909]: I1126 08:41:32.855736 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 26 08:41:32 crc kubenswrapper[4909]: I1126 08:41:32.861524 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-51b4-account-create-4dxrc"] Nov 26 08:41:32 crc kubenswrapper[4909]: I1126 08:41:32.972272 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5gmh\" (UniqueName: \"kubernetes.io/projected/c2e3c678-4a1f-4fa3-9e4d-cb414adcca54-kube-api-access-b5gmh\") pod \"manila-51b4-account-create-4dxrc\" (UID: \"c2e3c678-4a1f-4fa3-9e4d-cb414adcca54\") " pod="openstack/manila-51b4-account-create-4dxrc" Nov 26 08:41:33 crc kubenswrapper[4909]: I1126 08:41:33.074886 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5gmh\" (UniqueName: \"kubernetes.io/projected/c2e3c678-4a1f-4fa3-9e4d-cb414adcca54-kube-api-access-b5gmh\") pod \"manila-51b4-account-create-4dxrc\" (UID: \"c2e3c678-4a1f-4fa3-9e4d-cb414adcca54\") " pod="openstack/manila-51b4-account-create-4dxrc" Nov 26 08:41:33 crc kubenswrapper[4909]: I1126 08:41:33.096690 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5gmh\" (UniqueName: \"kubernetes.io/projected/c2e3c678-4a1f-4fa3-9e4d-cb414adcca54-kube-api-access-b5gmh\") pod \"manila-51b4-account-create-4dxrc\" (UID: \"c2e3c678-4a1f-4fa3-9e4d-cb414adcca54\") " pod="openstack/manila-51b4-account-create-4dxrc" Nov 26 08:41:33 crc kubenswrapper[4909]: I1126 08:41:33.178440 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-51b4-account-create-4dxrc" Nov 26 08:41:33 crc kubenswrapper[4909]: I1126 08:41:33.676572 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-51b4-account-create-4dxrc"] Nov 26 08:41:33 crc kubenswrapper[4909]: W1126 08:41:33.687340 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2e3c678_4a1f_4fa3_9e4d_cb414adcca54.slice/crio-2fba7a2b836bd1c5605948eeacefcc91056b5d43a7cf81cb873bc0d2ddd69075 WatchSource:0}: Error finding container 2fba7a2b836bd1c5605948eeacefcc91056b5d43a7cf81cb873bc0d2ddd69075: Status 404 returned error can't find the container with id 2fba7a2b836bd1c5605948eeacefcc91056b5d43a7cf81cb873bc0d2ddd69075 Nov 26 08:41:34 crc kubenswrapper[4909]: I1126 08:41:34.030740 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-55e3-account-create-rknvq"] Nov 26 08:41:34 crc kubenswrapper[4909]: I1126 08:41:34.040988 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-55e3-account-create-rknvq"] Nov 26 08:41:34 crc kubenswrapper[4909]: I1126 08:41:34.188540 4909 generic.go:334] "Generic (PLEG): container finished" podID="c2e3c678-4a1f-4fa3-9e4d-cb414adcca54" containerID="1e873411e0fef74689c76ae55bc2f3e6d2f99623e686822a67cc3e4ed5b49fef" exitCode=0 Nov 26 08:41:34 crc kubenswrapper[4909]: I1126 08:41:34.188580 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-51b4-account-create-4dxrc" event={"ID":"c2e3c678-4a1f-4fa3-9e4d-cb414adcca54","Type":"ContainerDied","Data":"1e873411e0fef74689c76ae55bc2f3e6d2f99623e686822a67cc3e4ed5b49fef"} Nov 26 08:41:34 crc kubenswrapper[4909]: I1126 08:41:34.188621 4909 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/manila-51b4-account-create-4dxrc" event={"ID":"c2e3c678-4a1f-4fa3-9e4d-cb414adcca54","Type":"ContainerStarted","Data":"2fba7a2b836bd1c5605948eeacefcc91056b5d43a7cf81cb873bc0d2ddd69075"} Nov 26 08:41:34 crc kubenswrapper[4909]: I1126 08:41:34.520141 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf11ea3-34c1-4351-8799-24196560d05e" path="/var/lib/kubelet/pods/acf11ea3-34c1-4351-8799-24196560d05e/volumes" Nov 26 08:41:35 crc kubenswrapper[4909]: I1126 08:41:35.621765 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-51b4-account-create-4dxrc" Nov 26 08:41:35 crc kubenswrapper[4909]: I1126 08:41:35.743120 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5gmh\" (UniqueName: \"kubernetes.io/projected/c2e3c678-4a1f-4fa3-9e4d-cb414adcca54-kube-api-access-b5gmh\") pod \"c2e3c678-4a1f-4fa3-9e4d-cb414adcca54\" (UID: \"c2e3c678-4a1f-4fa3-9e4d-cb414adcca54\") " Nov 26 08:41:35 crc kubenswrapper[4909]: I1126 08:41:35.748813 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e3c678-4a1f-4fa3-9e4d-cb414adcca54-kube-api-access-b5gmh" (OuterVolumeSpecName: "kube-api-access-b5gmh") pod "c2e3c678-4a1f-4fa3-9e4d-cb414adcca54" (UID: "c2e3c678-4a1f-4fa3-9e4d-cb414adcca54"). InnerVolumeSpecName "kube-api-access-b5gmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:41:35 crc kubenswrapper[4909]: I1126 08:41:35.847154 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5gmh\" (UniqueName: \"kubernetes.io/projected/c2e3c678-4a1f-4fa3-9e4d-cb414adcca54-kube-api-access-b5gmh\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:36 crc kubenswrapper[4909]: I1126 08:41:36.215182 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-51b4-account-create-4dxrc" event={"ID":"c2e3c678-4a1f-4fa3-9e4d-cb414adcca54","Type":"ContainerDied","Data":"2fba7a2b836bd1c5605948eeacefcc91056b5d43a7cf81cb873bc0d2ddd69075"} Nov 26 08:41:36 crc kubenswrapper[4909]: I1126 08:41:36.215267 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fba7a2b836bd1c5605948eeacefcc91056b5d43a7cf81cb873bc0d2ddd69075" Nov 26 08:41:36 crc kubenswrapper[4909]: I1126 08:41:36.215848 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-51b4-account-create-4dxrc" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.118327 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-8v2j5"] Nov 26 08:41:38 crc kubenswrapper[4909]: E1126 08:41:38.123973 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e3c678-4a1f-4fa3-9e4d-cb414adcca54" containerName="mariadb-account-create" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.124008 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e3c678-4a1f-4fa3-9e4d-cb414adcca54" containerName="mariadb-account-create" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.124276 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e3c678-4a1f-4fa3-9e4d-cb414adcca54" containerName="mariadb-account-create" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.125200 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.128473 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.133016 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-wz7jf" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.136169 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-8v2j5"] Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.205051 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-combined-ca-bundle\") pod \"manila-db-sync-8v2j5\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.205101 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-job-config-data\") pod \"manila-db-sync-8v2j5\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.205419 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-config-data\") pod \"manila-db-sync-8v2j5\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.205495 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww45f\" (UniqueName: \"kubernetes.io/projected/f4237001-afe3-49f8-84cd-6772277c3020-kube-api-access-ww45f\") pod \"manila-db-sync-8v2j5\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.307760 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-config-data\") pod \"manila-db-sync-8v2j5\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.307827 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww45f\" (UniqueName: \"kubernetes.io/projected/f4237001-afe3-49f8-84cd-6772277c3020-kube-api-access-ww45f\") pod \"manila-db-sync-8v2j5\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.307998 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-combined-ca-bundle\") pod \"manila-db-sync-8v2j5\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.308073 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-job-config-data\") pod \"manila-db-sync-8v2j5\" (UID: 
\"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.314096 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-config-data\") pod \"manila-db-sync-8v2j5\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.314663 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-combined-ca-bundle\") pod \"manila-db-sync-8v2j5\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.315140 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-job-config-data\") pod \"manila-db-sync-8v2j5\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.334654 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww45f\" (UniqueName: \"kubernetes.io/projected/f4237001-afe3-49f8-84cd-6772277c3020-kube-api-access-ww45f\") pod \"manila-db-sync-8v2j5\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:38 crc kubenswrapper[4909]: I1126 08:41:38.461977 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:39 crc kubenswrapper[4909]: I1126 08:41:39.365507 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-8v2j5"] Nov 26 08:41:39 crc kubenswrapper[4909]: W1126 08:41:39.367937 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4237001_afe3_49f8_84cd_6772277c3020.slice/crio-374fe24df90092a6e4a028ecfc86bfc741ed35a2efd8fe30126d8707408ff271 WatchSource:0}: Error finding container 374fe24df90092a6e4a028ecfc86bfc741ed35a2efd8fe30126d8707408ff271: Status 404 returned error can't find the container with id 374fe24df90092a6e4a028ecfc86bfc741ed35a2efd8fe30126d8707408ff271 Nov 26 08:41:40 crc kubenswrapper[4909]: I1126 08:41:40.260031 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-8v2j5" event={"ID":"f4237001-afe3-49f8-84cd-6772277c3020","Type":"ContainerStarted","Data":"374fe24df90092a6e4a028ecfc86bfc741ed35a2efd8fe30126d8707408ff271"} Nov 26 08:41:41 crc kubenswrapper[4909]: I1126 08:41:41.075076 4909 scope.go:117] "RemoveContainer" containerID="8c8190a1de854c52412c8e8a0336569605b98e73d49817e531e53eb1f6dd5af4" Nov 26 08:41:43 crc kubenswrapper[4909]: I1126 08:41:43.036104 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-79m9z"] Nov 26 08:41:43 crc kubenswrapper[4909]: I1126 08:41:43.052254 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-79m9z"] Nov 26 08:41:44 crc kubenswrapper[4909]: I1126 08:41:44.053420 4909 scope.go:117] "RemoveContainer" containerID="982b5f724b47ec9cc5a95d59f5a34c5c7a8f90869ea6645cbdf86e933ac70a10" Nov 26 08:41:44 crc kubenswrapper[4909]: I1126 08:41:44.095065 4909 scope.go:117] "RemoveContainer" 
containerID="1f068b9f42a9bc0fd12b58e4a4e280abc7924725741ea0ee414cf038c41d1b28" Nov 26 08:41:44 crc kubenswrapper[4909]: I1126 08:41:44.281624 4909 scope.go:117] "RemoveContainer" containerID="e5c3dc4167208114c5d6ff1321c86e6a285d9a8fd663e9ecd6d36ba6dbc6d67a" Nov 26 08:41:44 crc kubenswrapper[4909]: I1126 08:41:44.513523 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b" path="/var/lib/kubelet/pods/1f53c6ed-d2ac-4e96-8aa5-82427f3c3f7b/volumes" Nov 26 08:41:45 crc kubenswrapper[4909]: I1126 08:41:45.337352 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-8v2j5" event={"ID":"f4237001-afe3-49f8-84cd-6772277c3020","Type":"ContainerStarted","Data":"b25932b4848b63e9f97254d2fe469f8e80fdf12eb8f008e65674b14d60a76466"} Nov 26 08:41:45 crc kubenswrapper[4909]: I1126 08:41:45.383313 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-8v2j5" podStartSLOduration=2.657130559 podStartE2EDuration="7.383285146s" podCreationTimestamp="2025-11-26 08:41:38 +0000 UTC" firstStartedPulling="2025-11-26 08:41:39.370603352 +0000 UTC m=+6071.516814518" lastFinishedPulling="2025-11-26 08:41:44.096757939 +0000 UTC m=+6076.242969105" observedRunningTime="2025-11-26 08:41:45.362221782 +0000 UTC m=+6077.508432978" watchObservedRunningTime="2025-11-26 08:41:45.383285146 +0000 UTC m=+6077.529496352" Nov 26 08:41:46 crc kubenswrapper[4909]: I1126 08:41:46.357672 4909 generic.go:334] "Generic (PLEG): container finished" podID="f4237001-afe3-49f8-84cd-6772277c3020" containerID="b25932b4848b63e9f97254d2fe469f8e80fdf12eb8f008e65674b14d60a76466" exitCode=0 Nov 26 08:41:46 crc kubenswrapper[4909]: I1126 08:41:46.357736 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-8v2j5" event={"ID":"f4237001-afe3-49f8-84cd-6772277c3020","Type":"ContainerDied","Data":"b25932b4848b63e9f97254d2fe469f8e80fdf12eb8f008e65674b14d60a76466"} Nov 26 08:41:47 crc kubenswrapper[4909]: I1126 08:41:47.839194 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:47 crc kubenswrapper[4909]: I1126 08:41:47.945373 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww45f\" (UniqueName: \"kubernetes.io/projected/f4237001-afe3-49f8-84cd-6772277c3020-kube-api-access-ww45f\") pod \"f4237001-afe3-49f8-84cd-6772277c3020\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " Nov 26 08:41:47 crc kubenswrapper[4909]: I1126 08:41:47.945717 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-combined-ca-bundle\") pod \"f4237001-afe3-49f8-84cd-6772277c3020\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " Nov 26 08:41:47 crc kubenswrapper[4909]: I1126 08:41:47.945783 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-job-config-data\") pod \"f4237001-afe3-49f8-84cd-6772277c3020\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " Nov 26 08:41:47 crc kubenswrapper[4909]: I1126 08:41:47.945820 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-config-data\") pod \"f4237001-afe3-49f8-84cd-6772277c3020\" (UID: \"f4237001-afe3-49f8-84cd-6772277c3020\") " Nov 26 08:41:47 crc kubenswrapper[4909]: I1126 08:41:47.952204 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "f4237001-afe3-49f8-84cd-6772277c3020" (UID: "f4237001-afe3-49f8-84cd-6772277c3020"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:41:47 crc kubenswrapper[4909]: I1126 08:41:47.955512 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4237001-afe3-49f8-84cd-6772277c3020-kube-api-access-ww45f" (OuterVolumeSpecName: "kube-api-access-ww45f") pod "f4237001-afe3-49f8-84cd-6772277c3020" (UID: "f4237001-afe3-49f8-84cd-6772277c3020"). InnerVolumeSpecName "kube-api-access-ww45f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:41:47 crc kubenswrapper[4909]: I1126 08:41:47.977804 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-config-data" (OuterVolumeSpecName: "config-data") pod "f4237001-afe3-49f8-84cd-6772277c3020" (UID: "f4237001-afe3-49f8-84cd-6772277c3020"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:41:47 crc kubenswrapper[4909]: I1126 08:41:47.990759 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4237001-afe3-49f8-84cd-6772277c3020" (UID: "f4237001-afe3-49f8-84cd-6772277c3020"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.048564 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww45f\" (UniqueName: \"kubernetes.io/projected/f4237001-afe3-49f8-84cd-6772277c3020-kube-api-access-ww45f\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.048620 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.048634 4909 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.048645 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4237001-afe3-49f8-84cd-6772277c3020-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.385886 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-8v2j5" event={"ID":"f4237001-afe3-49f8-84cd-6772277c3020","Type":"ContainerDied","Data":"374fe24df90092a6e4a028ecfc86bfc741ed35a2efd8fe30126d8707408ff271"} Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.385939 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="374fe24df90092a6e4a028ecfc86bfc741ed35a2efd8fe30126d8707408ff271" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.386019 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-8v2j5" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.690545 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 26 08:41:48 crc kubenswrapper[4909]: E1126 08:41:48.691455 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4237001-afe3-49f8-84cd-6772277c3020" containerName="manila-db-sync" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.691492 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4237001-afe3-49f8-84cd-6772277c3020" containerName="manila-db-sync" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.691934 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4237001-afe3-49f8-84cd-6772277c3020" containerName="manila-db-sync" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.693988 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.697443 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.697522 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.697525 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-wz7jf" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.697568 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.700913 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.704272 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.706477 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.709721 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.712979 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.751692 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.865612 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-scripts\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.865666 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e6333d-03ca-438f-882f-b3415c11e3fc-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.865764 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-config-data\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.865786 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d5e6333d-03ca-438f-882f-b3415c11e3fc-ceph\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.865862 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-scripts\") pod \"manila-share-share1-0\" (UID: 
\"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.865892 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.865941 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.865998 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.866030 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.866082 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83c9363e-4ca4-4b81-8470-651bdb6f7c28-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.866109 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/d5e6333d-03ca-438f-882f-b3415c11e3fc-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.866137 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-config-data\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.866163 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjtsw\" (UniqueName: \"kubernetes.io/projected/83c9363e-4ca4-4b81-8470-651bdb6f7c28-kube-api-access-bjtsw\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.866182 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8vjc\" (UniqueName: \"kubernetes.io/projected/d5e6333d-03ca-438f-882f-b3415c11e3fc-kube-api-access-v8vjc\") pod \"manila-share-share1-0\" (UID: 
\"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.919268 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7876bb76fc-p8ccd"] Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.921152 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.945033 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7876bb76fc-p8ccd"] Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.968899 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-scripts\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.968949 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.968980 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969018 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969048 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969082 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83c9363e-4ca4-4b81-8470-651bdb6f7c28-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969102 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/d5e6333d-03ca-438f-882f-b3415c11e3fc-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969131 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-config-data\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " 
pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969161 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjtsw\" (UniqueName: \"kubernetes.io/projected/83c9363e-4ca4-4b81-8470-651bdb6f7c28-kube-api-access-bjtsw\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969176 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8vjc\" (UniqueName: \"kubernetes.io/projected/d5e6333d-03ca-438f-882f-b3415c11e3fc-kube-api-access-v8vjc\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969194 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e6333d-03ca-438f-882f-b3415c11e3fc-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969210 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-scripts\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969257 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-config-data\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969274 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d5e6333d-03ca-438f-882f-b3415c11e3fc-ceph\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.969741 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e6333d-03ca-438f-882f-b3415c11e3fc-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.979208 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83c9363e-4ca4-4b81-8470-651bdb6f7c28-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.981714 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-scripts\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.982624 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.985688 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-scripts\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.986418 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/d5e6333d-03ca-438f-882f-b3415c11e3fc-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.986955 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d5e6333d-03ca-438f-882f-b3415c11e3fc-ceph\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.987212 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-config-data\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.987687 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.991991 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.992380 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e6333d-03ca-438f-882f-b3415c11e3fc-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:48 crc kubenswrapper[4909]: I1126 08:41:48.993097 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjtsw\" (UniqueName: \"kubernetes.io/projected/83c9363e-4ca4-4b81-8470-651bdb6f7c28-kube-api-access-bjtsw\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.003072 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c9363e-4ca4-4b81-8470-651bdb6f7c28-config-data\") pod \"manila-scheduler-0\" (UID: \"83c9363e-4ca4-4b81-8470-651bdb6f7c28\") " pod="openstack/manila-scheduler-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.004308 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v8vjc\" (UniqueName: \"kubernetes.io/projected/d5e6333d-03ca-438f-882f-b3415c11e3fc-kube-api-access-v8vjc\") pod \"manila-share-share1-0\" (UID: \"d5e6333d-03ca-438f-882f-b3415c11e3fc\") " pod="openstack/manila-share-share1-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.015854 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.038166 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.071794 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-sb\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.071919 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-nb\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.071970 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-dns-svc\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.071997 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfwsd\" (UniqueName: \"kubernetes.io/projected/c6c10900-1ca3-451b-8f45-4244f9f77701-kube-api-access-jfwsd\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.072070 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-config\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.118211 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.120655 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.123472 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.143121 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.175629 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-config\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.175680 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-sb\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.175775 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-nb\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.175819 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-dns-svc\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.175845 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfwsd\" (UniqueName: \"kubernetes.io/projected/c6c10900-1ca3-451b-8f45-4244f9f77701-kube-api-access-jfwsd\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.177217 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-config\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.177769 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-sb\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.178270 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-nb\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.178786 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-dns-svc\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.195374 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfwsd\" (UniqueName: \"kubernetes.io/projected/c6c10900-1ca3-451b-8f45-4244f9f77701-kube-api-access-jfwsd\") pod \"dnsmasq-dns-7876bb76fc-p8ccd\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.258792 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.277731 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f3de1a-d37e-43b1-9882-2d0678d3839b-logs\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.277774 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-config-data-custom\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.277826 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmcp7\" (UniqueName: \"kubernetes.io/projected/17f3de1a-d37e-43b1-9882-2d0678d3839b-kube-api-access-qmcp7\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.277859 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17f3de1a-d37e-43b1-9882-2d0678d3839b-etc-machine-id\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.277889 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-scripts\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.277987 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-config-data\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.278013 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.403849 4909 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qmcp7\" (UniqueName: \"kubernetes.io/projected/17f3de1a-d37e-43b1-9882-2d0678d3839b-kube-api-access-qmcp7\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.404059 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17f3de1a-d37e-43b1-9882-2d0678d3839b-etc-machine-id\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.404104 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-scripts\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.404225 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-config-data\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.404255 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.404289 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f3de1a-d37e-43b1-9882-2d0678d3839b-logs\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.404287 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17f3de1a-d37e-43b1-9882-2d0678d3839b-etc-machine-id\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.404304 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-config-data-custom\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.407322 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f3de1a-d37e-43b1-9882-2d0678d3839b-logs\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.412443 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-config-data\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.413954 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-scripts\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.414343 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.417569 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f3de1a-d37e-43b1-9882-2d0678d3839b-config-data-custom\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.434739 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmcp7\" (UniqueName: \"kubernetes.io/projected/17f3de1a-d37e-43b1-9882-2d0678d3839b-kube-api-access-qmcp7\") pod \"manila-api-0\" (UID: \"17f3de1a-d37e-43b1-9882-2d0678d3839b\") " pod="openstack/manila-api-0" Nov 26 08:41:49 crc kubenswrapper[4909]: I1126 08:41:49.605023 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 26 08:41:50 crc kubenswrapper[4909]: I1126 08:41:50.539441 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 26 08:41:50 crc kubenswrapper[4909]: I1126 08:41:50.849140 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7876bb76fc-p8ccd"] Nov 26 08:41:50 crc kubenswrapper[4909]: I1126 08:41:50.911752 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 26 08:41:51 crc kubenswrapper[4909]: I1126 08:41:51.022607 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 26 08:41:51 crc kubenswrapper[4909]: I1126 08:41:51.509958 4909 generic.go:334] "Generic (PLEG): container finished" podID="c6c10900-1ca3-451b-8f45-4244f9f77701" containerID="2888b2b95ce62a48dff91d9ba3d6d072185eae537e43b002c8b5a91743c6ff95" exitCode=0 Nov 26 08:41:51 crc kubenswrapper[4909]: I1126 08:41:51.510090 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" event={"ID":"c6c10900-1ca3-451b-8f45-4244f9f77701","Type":"ContainerDied","Data":"2888b2b95ce62a48dff91d9ba3d6d072185eae537e43b002c8b5a91743c6ff95"} Nov 26 08:41:51 crc kubenswrapper[4909]: I1126 08:41:51.510502 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" event={"ID":"c6c10900-1ca3-451b-8f45-4244f9f77701","Type":"ContainerStarted","Data":"74bea8bf3d44990d72f66d7af2245c21bb3f9ec0eaf129adfc029fb30becfa6e"} Nov 26 08:41:51 crc kubenswrapper[4909]: I1126 08:41:51.520640 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"17f3de1a-d37e-43b1-9882-2d0678d3839b","Type":"ContainerStarted","Data":"ac6cc4ad336d190680c8ea6edadd1fa038397d85e862f4bb9b4143ceed17a075"} Nov 26 08:41:51 crc kubenswrapper[4909]: I1126 08:41:51.526235 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"83c9363e-4ca4-4b81-8470-651bdb6f7c28","Type":"ContainerStarted","Data":"f02cac9368388753f58d1ffe51eadd00a0335ac75fac29b213037526cd25b55f"} Nov 26 08:41:51 crc 
kubenswrapper[4909]: I1126 08:41:51.529641 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d5e6333d-03ca-438f-882f-b3415c11e3fc","Type":"ContainerStarted","Data":"b03bfa5c221c2b4bfaa32e2ca02b1f2036637d98b68a385765e2ff0de6da08c5"} Nov 26 08:41:52 crc kubenswrapper[4909]: I1126 08:41:52.542189 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" event={"ID":"c6c10900-1ca3-451b-8f45-4244f9f77701","Type":"ContainerStarted","Data":"e1017534567219d1b6d14e398b45686f79f41c26aaf6ed56f8c71d40291e8e64"} Nov 26 08:41:52 crc kubenswrapper[4909]: I1126 08:41:52.542804 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:52 crc kubenswrapper[4909]: I1126 08:41:52.545480 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"17f3de1a-d37e-43b1-9882-2d0678d3839b","Type":"ContainerStarted","Data":"5b266da2a0105fbd72f8dc9333c52a760e698907fc8b53f772244b26f89e5108"} Nov 26 08:41:52 crc kubenswrapper[4909]: I1126 08:41:52.545509 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"17f3de1a-d37e-43b1-9882-2d0678d3839b","Type":"ContainerStarted","Data":"7c3c4cd579ad917b31637081fd122d6d4f0b84ea560b5683779f79f0b6cdb014"} Nov 26 08:41:52 crc kubenswrapper[4909]: I1126 08:41:52.545626 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 26 08:41:52 crc kubenswrapper[4909]: I1126 08:41:52.550117 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"83c9363e-4ca4-4b81-8470-651bdb6f7c28","Type":"ContainerStarted","Data":"64b824ed1fdde1bd3596aebf6203bd86d45a80c161baf9dbeee15305814f5150"} Nov 26 08:41:52 crc kubenswrapper[4909]: I1126 08:41:52.550144 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"83c9363e-4ca4-4b81-8470-651bdb6f7c28","Type":"ContainerStarted","Data":"4da07264026bcd1fe1a45ab6a45c43ea293a3123c2ae0c3ccd8e6977d053c978"} Nov 26 08:41:52 crc kubenswrapper[4909]: I1126 08:41:52.563854 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" podStartSLOduration=4.563835803 podStartE2EDuration="4.563835803s" podCreationTimestamp="2025-11-26 08:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:41:52.560202694 +0000 UTC m=+6084.706413850" watchObservedRunningTime="2025-11-26 08:41:52.563835803 +0000 UTC m=+6084.710046969" Nov 26 08:41:52 crc kubenswrapper[4909]: I1126 08:41:52.583641 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.583622692 podStartE2EDuration="3.583622692s" podCreationTimestamp="2025-11-26 08:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:41:52.575053428 +0000 UTC m=+6084.721264594" watchObservedRunningTime="2025-11-26 08:41:52.583622692 +0000 UTC m=+6084.729833848" Nov 26 08:41:52 crc kubenswrapper[4909]: I1126 08:41:52.600149 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.910837559 podStartE2EDuration="4.600125711s" podCreationTimestamp="2025-11-26 08:41:48 
+0000 UTC" firstStartedPulling="2025-11-26 08:41:50.542378829 +0000 UTC m=+6082.688589995" lastFinishedPulling="2025-11-26 08:41:51.231666991 +0000 UTC m=+6083.377878147" observedRunningTime="2025-11-26 08:41:52.597560961 +0000 UTC m=+6084.743772137" watchObservedRunningTime="2025-11-26 08:41:52.600125711 +0000 UTC m=+6084.746336877" Nov 26 08:41:57 crc kubenswrapper[4909]: I1126 08:41:57.641769 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d5e6333d-03ca-438f-882f-b3415c11e3fc","Type":"ContainerStarted","Data":"84dd0f9b516a0e4fac463695453135edc4dfaf71586c7bf9867e5e0fcab275fc"} Nov 26 08:41:58 crc kubenswrapper[4909]: I1126 08:41:58.661690 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d5e6333d-03ca-438f-882f-b3415c11e3fc","Type":"ContainerStarted","Data":"8432fa863741cf2a1ef2fe0c71131feabb79522af4d81ac987b2a6a823d82380"} Nov 26 08:41:58 crc kubenswrapper[4909]: I1126 08:41:58.693955 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.96981498 podStartE2EDuration="10.693936357s" podCreationTimestamp="2025-11-26 08:41:48 +0000 UTC" firstStartedPulling="2025-11-26 08:41:50.899439383 +0000 UTC m=+6083.045650559" lastFinishedPulling="2025-11-26 08:41:56.62356077 +0000 UTC m=+6088.769771936" observedRunningTime="2025-11-26 08:41:58.689211558 +0000 UTC m=+6090.835422724" watchObservedRunningTime="2025-11-26 08:41:58.693936357 +0000 UTC m=+6090.840147523" Nov 26 08:41:59 crc kubenswrapper[4909]: I1126 08:41:59.016796 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 26 08:41:59 crc kubenswrapper[4909]: I1126 08:41:59.038696 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 26 08:41:59 crc kubenswrapper[4909]: I1126 08:41:59.261738 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:41:59 crc kubenswrapper[4909]: I1126 08:41:59.387594 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7784748f7f-s6hvz"] Nov 26 08:41:59 crc kubenswrapper[4909]: I1126 08:41:59.388287 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" podUID="ca293453-2173-43be-a1cc-a7da8c47f256" containerName="dnsmasq-dns" containerID="cri-o://201c9a16c55228024ecb09c501c43455b4410a46d06abbac683c977662ed7d1b" gracePeriod=10 Nov 26 08:41:59 crc kubenswrapper[4909]: I1126 08:41:59.675699 4909 generic.go:334] "Generic (PLEG): container finished" podID="ca293453-2173-43be-a1cc-a7da8c47f256" containerID="201c9a16c55228024ecb09c501c43455b4410a46d06abbac683c977662ed7d1b" exitCode=0 Nov 26 08:41:59 crc kubenswrapper[4909]: I1126 08:41:59.677190 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" event={"ID":"ca293453-2173-43be-a1cc-a7da8c47f256","Type":"ContainerDied","Data":"201c9a16c55228024ecb09c501c43455b4410a46d06abbac683c977662ed7d1b"} Nov 26 08:41:59 crc kubenswrapper[4909]: I1126 08:41:59.925100 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.068051 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv9dw\" (UniqueName: \"kubernetes.io/projected/ca293453-2173-43be-a1cc-a7da8c47f256-kube-api-access-fv9dw\") pod \"ca293453-2173-43be-a1cc-a7da8c47f256\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.068106 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-config\") pod \"ca293453-2173-43be-a1cc-a7da8c47f256\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.068139 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-nb\") pod \"ca293453-2173-43be-a1cc-a7da8c47f256\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.068253 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-sb\") pod \"ca293453-2173-43be-a1cc-a7da8c47f256\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.068279 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-dns-svc\") pod \"ca293453-2173-43be-a1cc-a7da8c47f256\" (UID: \"ca293453-2173-43be-a1cc-a7da8c47f256\") " Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.077573 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca293453-2173-43be-a1cc-a7da8c47f256-kube-api-access-fv9dw" (OuterVolumeSpecName: "kube-api-access-fv9dw") pod "ca293453-2173-43be-a1cc-a7da8c47f256" (UID: "ca293453-2173-43be-a1cc-a7da8c47f256"). InnerVolumeSpecName "kube-api-access-fv9dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.126565 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca293453-2173-43be-a1cc-a7da8c47f256" (UID: "ca293453-2173-43be-a1cc-a7da8c47f256"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.126597 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca293453-2173-43be-a1cc-a7da8c47f256" (UID: "ca293453-2173-43be-a1cc-a7da8c47f256"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.126945 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca293453-2173-43be-a1cc-a7da8c47f256" (UID: "ca293453-2173-43be-a1cc-a7da8c47f256"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.144006 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-config" (OuterVolumeSpecName: "config") pod "ca293453-2173-43be-a1cc-a7da8c47f256" (UID: "ca293453-2173-43be-a1cc-a7da8c47f256"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.171007 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.171041 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.171051 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv9dw\" (UniqueName: \"kubernetes.io/projected/ca293453-2173-43be-a1cc-a7da8c47f256-kube-api-access-fv9dw\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.171061 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.171073 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca293453-2173-43be-a1cc-a7da8c47f256-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.688976 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.689117 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7784748f7f-s6hvz" event={"ID":"ca293453-2173-43be-a1cc-a7da8c47f256","Type":"ContainerDied","Data":"93f354b7fdb0efc2eca3e5d46cc1c3f3f3af48e971ef47a2b0a0be915b5b32ce"} Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.689313 4909 scope.go:117] "RemoveContainer" containerID="201c9a16c55228024ecb09c501c43455b4410a46d06abbac683c977662ed7d1b" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.712107 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7784748f7f-s6hvz"] Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.718525 4909 scope.go:117] "RemoveContainer" containerID="fe88b164336357d8d81c6d20cd5d3123feeab49995c9dcd369e89a46f755ad48" Nov 26 08:42:00 crc kubenswrapper[4909]: I1126 08:42:00.720351 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7784748f7f-s6hvz"] Nov 26 08:42:01 crc kubenswrapper[4909]: I1126 08:42:01.467070 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:42:01 crc kubenswrapper[4909]: I1126 08:42:01.467387 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="ceilometer-central-agent" containerID="cri-o://10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce" gracePeriod=30 Nov 26 08:42:01 crc kubenswrapper[4909]: I1126 08:42:01.467801 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="proxy-httpd" containerID="cri-o://46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5" gracePeriod=30 Nov 26 08:42:01 crc kubenswrapper[4909]: I1126 08:42:01.467849 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="sg-core" containerID="cri-o://8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4" gracePeriod=30 Nov 26 08:42:01 crc kubenswrapper[4909]: I1126 08:42:01.467880 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="ceilometer-notification-agent" containerID="cri-o://59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484" gracePeriod=30 Nov 26 08:42:01 crc kubenswrapper[4909]: I1126 08:42:01.708012 4909 generic.go:334] "Generic (PLEG): container finished" podID="27e4c6ae-206c-49af-9994-4ae242693621" containerID="46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5" exitCode=0 Nov 26 08:42:01 crc kubenswrapper[4909]: I1126 08:42:01.708348 4909 generic.go:334] "Generic (PLEG): container finished" podID="27e4c6ae-206c-49af-9994-4ae242693621" containerID="8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4" exitCode=2 Nov 26 08:42:01 crc kubenswrapper[4909]: I1126 08:42:01.708099 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e4c6ae-206c-49af-9994-4ae242693621","Type":"ContainerDied","Data":"46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5"} Nov 26 08:42:01 crc kubenswrapper[4909]: I1126 08:42:01.708405 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"27e4c6ae-206c-49af-9994-4ae242693621","Type":"ContainerDied","Data":"8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4"} Nov 26 08:42:02 crc kubenswrapper[4909]: I1126 08:42:02.510080 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca293453-2173-43be-a1cc-a7da8c47f256" path="/var/lib/kubelet/pods/ca293453-2173-43be-a1cc-a7da8c47f256/volumes" Nov 26 08:42:02 crc kubenswrapper[4909]: I1126 08:42:02.725514 4909 generic.go:334] "Generic (PLEG): container finished" podID="27e4c6ae-206c-49af-9994-4ae242693621" containerID="10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce" exitCode=0 Nov 26 08:42:02 crc kubenswrapper[4909]: I1126 08:42:02.725632 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e4c6ae-206c-49af-9994-4ae242693621","Type":"ContainerDied","Data":"10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce"} Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.540857 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.661280 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-combined-ca-bundle\") pod \"27e4c6ae-206c-49af-9994-4ae242693621\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.661580 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-run-httpd\") pod \"27e4c6ae-206c-49af-9994-4ae242693621\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.661679 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-config-data\") pod \"27e4c6ae-206c-49af-9994-4ae242693621\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.661695 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-log-httpd\") pod \"27e4c6ae-206c-49af-9994-4ae242693621\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.661763 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-sg-core-conf-yaml\") pod \"27e4c6ae-206c-49af-9994-4ae242693621\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.661855 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-scripts\") pod \"27e4c6ae-206c-49af-9994-4ae242693621\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.661921 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdgt4\" (UniqueName: \"kubernetes.io/projected/27e4c6ae-206c-49af-9994-4ae242693621-kube-api-access-rdgt4\") pod 
\"27e4c6ae-206c-49af-9994-4ae242693621\" (UID: \"27e4c6ae-206c-49af-9994-4ae242693621\") " Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.662584 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27e4c6ae-206c-49af-9994-4ae242693621" (UID: "27e4c6ae-206c-49af-9994-4ae242693621"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.662734 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27e4c6ae-206c-49af-9994-4ae242693621" (UID: "27e4c6ae-206c-49af-9994-4ae242693621"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.667079 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e4c6ae-206c-49af-9994-4ae242693621-kube-api-access-rdgt4" (OuterVolumeSpecName: "kube-api-access-rdgt4") pod "27e4c6ae-206c-49af-9994-4ae242693621" (UID: "27e4c6ae-206c-49af-9994-4ae242693621"). InnerVolumeSpecName "kube-api-access-rdgt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.678993 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-scripts" (OuterVolumeSpecName: "scripts") pod "27e4c6ae-206c-49af-9994-4ae242693621" (UID: "27e4c6ae-206c-49af-9994-4ae242693621"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.695574 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27e4c6ae-206c-49af-9994-4ae242693621" (UID: "27e4c6ae-206c-49af-9994-4ae242693621"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.740150 4909 generic.go:334] "Generic (PLEG): container finished" podID="27e4c6ae-206c-49af-9994-4ae242693621" containerID="59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484" exitCode=0 Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.740218 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.740224 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e4c6ae-206c-49af-9994-4ae242693621","Type":"ContainerDied","Data":"59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484"} Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.740291 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e4c6ae-206c-49af-9994-4ae242693621","Type":"ContainerDied","Data":"f6407e62495f22c6741016d7d2990033e754f47eb80906be7c8dbe5264cb8c9e"} Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.740313 4909 scope.go:117] "RemoveContainer" containerID="46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.764043 4909 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.764079 4909 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e4c6ae-206c-49af-9994-4ae242693621-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.764095 4909 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.764109 4909 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.764120 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdgt4\" (UniqueName: \"kubernetes.io/projected/27e4c6ae-206c-49af-9994-4ae242693621-kube-api-access-rdgt4\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.771687 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-config-data" (OuterVolumeSpecName: "config-data") pod "27e4c6ae-206c-49af-9994-4ae242693621" (UID: "27e4c6ae-206c-49af-9994-4ae242693621"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.775117 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27e4c6ae-206c-49af-9994-4ae242693621" (UID: "27e4c6ae-206c-49af-9994-4ae242693621"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.775466 4909 scope.go:117] "RemoveContainer" containerID="8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.794353 4909 scope.go:117] "RemoveContainer" containerID="59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.817757 4909 scope.go:117] "RemoveContainer" containerID="10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.841945 4909 scope.go:117] "RemoveContainer" containerID="46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5" Nov 26 08:42:03 crc kubenswrapper[4909]: E1126 08:42:03.843853 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5\": container with ID starting with 46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5 not found: ID does not exist" containerID="46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.843896 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5"} err="failed to get container status \"46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5\": rpc error: code = NotFound desc = could not find container \"46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5\": container with ID starting with 46914c6562bd589302e4c9c6a7179f158664d739603c4b68a7e1b6b4003b7fe5 not found: ID does not exist" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.843929 4909 scope.go:117] "RemoveContainer" containerID="8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4" Nov 26 08:42:03 crc kubenswrapper[4909]: E1126 08:42:03.844415 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4\": container with ID starting with 8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4 not found: ID does not exist" containerID="8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.844438 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4"} err="failed to get container status \"8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4\": rpc error: code = NotFound desc = could not find container \"8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4\": container with ID starting with 8efeff5dbb13f6c9d59c399bcad7d91b3b865896a4d1a3eeca8a8b8915e905d4 not found: ID does not exist" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.844453 4909 scope.go:117] "RemoveContainer" containerID="59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484" Nov 26 08:42:03 crc kubenswrapper[4909]: E1126 08:42:03.844770 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484\": container with ID starting with 
59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484 not found: ID does not exist" containerID="59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.844789 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484"} err="failed to get container status \"59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484\": rpc error: code = NotFound desc = could not find container \"59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484\": container with ID starting with 59df0695311fcd2b9fed23dbaa2eacb10cfbf47699253ec62a9f957f79d7a484 not found: ID does not exist" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.844805 4909 scope.go:117] "RemoveContainer" containerID="10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce" Nov 26 08:42:03 crc kubenswrapper[4909]: E1126 08:42:03.845186 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce\": container with ID starting with 10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce not found: ID does not exist" containerID="10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.845227 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce"} err="failed to get container status \"10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce\": rpc error: code = NotFound desc = could not find container \"10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce\": container with ID starting with 10e96dbbeef2255fe4aae4e2de7bee29e5341a5e25fd6cbf9a37e0124bea14ce not found: ID does not exist" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.868968 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:03 crc kubenswrapper[4909]: I1126 08:42:03.868997 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e4c6ae-206c-49af-9994-4ae242693621-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.085462 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.100875 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.122188 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:42:04 crc kubenswrapper[4909]: E1126 08:42:04.122807 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="ceilometer-notification-agent" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.122839 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="ceilometer-notification-agent" Nov 26 08:42:04 crc kubenswrapper[4909]: E1126 08:42:04.122860 4909 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ca293453-2173-43be-a1cc-a7da8c47f256" containerName="dnsmasq-dns" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.122872 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca293453-2173-43be-a1cc-a7da8c47f256" containerName="dnsmasq-dns" Nov 26 08:42:04 crc kubenswrapper[4909]: E1126 08:42:04.122895 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="ceilometer-central-agent" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.122908 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="ceilometer-central-agent" Nov 26 08:42:04 crc kubenswrapper[4909]: E1126 08:42:04.122936 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="sg-core" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.122948 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="sg-core" Nov 26 08:42:04 crc kubenswrapper[4909]: E1126 08:42:04.122976 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca293453-2173-43be-a1cc-a7da8c47f256" containerName="init" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.122989 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca293453-2173-43be-a1cc-a7da8c47f256" containerName="init" Nov 26 08:42:04 crc kubenswrapper[4909]: E1126 08:42:04.123025 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="proxy-httpd" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.123035 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="proxy-httpd" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.123382 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="ceilometer-notification-agent" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.123423 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="ceilometer-central-agent" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.123457 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="proxy-httpd" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.123499 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca293453-2173-43be-a1cc-a7da8c47f256" containerName="dnsmasq-dns" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.123524 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e4c6ae-206c-49af-9994-4ae242693621" containerName="sg-core" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.126972 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.130330 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.132440 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.136449 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.277096 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.277182 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qzvn\" (UniqueName: \"kubernetes.io/projected/92b4fe15-bb71-47cc-8560-763176a1a666-kube-api-access-7qzvn\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.277204 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92b4fe15-bb71-47cc-8560-763176a1a666-log-httpd\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.277220 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-scripts\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.277263 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-config-data\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.277305 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92b4fe15-bb71-47cc-8560-763176a1a666-run-httpd\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.277363 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.378888 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92b4fe15-bb71-47cc-8560-763176a1a666-run-httpd\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.378957 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.379041 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.379656 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92b4fe15-bb71-47cc-8560-763176a1a666-run-httpd\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.379889 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qzvn\" (UniqueName: \"kubernetes.io/projected/92b4fe15-bb71-47cc-8560-763176a1a666-kube-api-access-7qzvn\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.379967 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-scripts\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.379985 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92b4fe15-bb71-47cc-8560-763176a1a666-log-httpd\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.380104 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-config-data\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.380395 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92b4fe15-bb71-47cc-8560-763176a1a666-log-httpd\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.385522 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-config-data\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.387460 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-scripts\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.387634 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.387952 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92b4fe15-bb71-47cc-8560-763176a1a666-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.397380 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qzvn\" (UniqueName: \"kubernetes.io/projected/92b4fe15-bb71-47cc-8560-763176a1a666-kube-api-access-7qzvn\") pod \"ceilometer-0\" (UID: \"92b4fe15-bb71-47cc-8560-763176a1a666\") " pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.484076 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.533062 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e4c6ae-206c-49af-9994-4ae242693621" path="/var/lib/kubelet/pods/27e4c6ae-206c-49af-9994-4ae242693621/volumes" Nov 26 08:42:04 crc kubenswrapper[4909]: I1126 08:42:04.989663 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 08:42:05 crc kubenswrapper[4909]: I1126 08:42:05.768798 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92b4fe15-bb71-47cc-8560-763176a1a666","Type":"ContainerStarted","Data":"38d3b20124de6fde376385a306f984eb4873c29b9469e86390b85c0cdd0515cd"} Nov 26 08:42:06 crc kubenswrapper[4909]: I1126 08:42:06.782244 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92b4fe15-bb71-47cc-8560-763176a1a666","Type":"ContainerStarted","Data":"8b952e537f65a41324c0057d0c07bf0c464d637ca6e516d8693e6a5879ecdf39"} Nov 26 08:42:06 crc kubenswrapper[4909]: I1126 08:42:06.782647 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92b4fe15-bb71-47cc-8560-763176a1a666","Type":"ContainerStarted","Data":"2bbe11a73c1c932b51f6baa87c9cc277b03c06f48bc6803140ecbdb32abc0b2b"} Nov 26 08:42:07 crc kubenswrapper[4909]: I1126 08:42:07.300651 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:42:07 crc kubenswrapper[4909]: I1126 08:42:07.300987 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:42:07 crc kubenswrapper[4909]: I1126 08:42:07.797402 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92b4fe15-bb71-47cc-8560-763176a1a666","Type":"ContainerStarted","Data":"15a9ad5598fef2c56fceb7cbd47f69f9a76616f416d091d8dbd4395c74d1f626"} Nov 26 08:42:08 crc kubenswrapper[4909]: I1126 08:42:08.812398 4909 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"92b4fe15-bb71-47cc-8560-763176a1a666","Type":"ContainerStarted","Data":"fe32bbbbde0fab59d3031a74da82acf71fd7a401b569af46d10a01c1a10cd156"} Nov 26 08:42:08 crc kubenswrapper[4909]: I1126 08:42:08.812759 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 08:42:08 crc kubenswrapper[4909]: I1126 08:42:08.856403 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.505549172 podStartE2EDuration="4.856384121s" podCreationTimestamp="2025-11-26 08:42:04 +0000 UTC" firstStartedPulling="2025-11-26 08:42:05.000113466 +0000 UTC m=+6097.146324712" lastFinishedPulling="2025-11-26 08:42:08.350948485 +0000 UTC m=+6100.497159661" observedRunningTime="2025-11-26 08:42:08.840825217 +0000 UTC m=+6100.987036383" watchObservedRunningTime="2025-11-26 08:42:08.856384121 +0000 UTC m=+6101.002595307" Nov 26 08:42:10 crc kubenswrapper[4909]: I1126 08:42:10.578691 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 26 08:42:10 crc kubenswrapper[4909]: I1126 08:42:10.605443 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 26 08:42:10 crc kubenswrapper[4909]: I1126 08:42:10.963055 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Nov 26 08:42:34 crc kubenswrapper[4909]: I1126 08:42:34.490903 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 26 08:42:37 crc kubenswrapper[4909]: I1126 08:42:37.301583 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:42:37 crc kubenswrapper[4909]: I1126 08:42:37.302221 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:42:44 crc kubenswrapper[4909]: I1126 08:42:44.448293 4909 scope.go:117] "RemoveContainer" containerID="7f810459f6eef7e1cfaee2f920bada1fe149752d8588ce07282dc1875e677788" Nov 26 08:42:57 crc kubenswrapper[4909]: I1126 08:42:57.919185 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65f77b9c99-l2nk7"] Nov 26 08:42:57 crc kubenswrapper[4909]: I1126 08:42:57.921901 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:57 crc kubenswrapper[4909]: I1126 08:42:57.932009 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Nov 26 08:42:57 crc kubenswrapper[4909]: I1126 08:42:57.948960 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65f77b9c99-l2nk7"] Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.022217 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-dns-svc\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.022389 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-openstack-cell1\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.022646 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-config\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.022798 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-sb\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.022918 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-nb\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.024104 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djhrf\" (UniqueName: \"kubernetes.io/projected/44aedccc-414c-4d23-9d07-887b803b1b02-kube-api-access-djhrf\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.126404 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djhrf\" (UniqueName: \"kubernetes.io/projected/44aedccc-414c-4d23-9d07-887b803b1b02-kube-api-access-djhrf\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.126470 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-dns-svc\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: 
\"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.126499 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-openstack-cell1\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.126538 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-config\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.126574 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-sb\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.126629 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-nb\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.127795 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-sb\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.127842 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-nb\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.127842 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-dns-svc\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.128033 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-openstack-cell1\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.128517 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-config\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 
08:42:58.169494 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djhrf\" (UniqueName: \"kubernetes.io/projected/44aedccc-414c-4d23-9d07-887b803b1b02-kube-api-access-djhrf\") pod \"dnsmasq-dns-65f77b9c99-l2nk7\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.252178 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:42:58 crc kubenswrapper[4909]: I1126 08:42:58.788846 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65f77b9c99-l2nk7"] Nov 26 08:42:59 crc kubenswrapper[4909]: I1126 08:42:59.450784 4909 generic.go:334] "Generic (PLEG): container finished" podID="44aedccc-414c-4d23-9d07-887b803b1b02" containerID="83000b56c4dc085d5749411a175113c7e4b829c9d9885148bfcf7692fe2b58f6" exitCode=0 Nov 26 08:42:59 crc kubenswrapper[4909]: I1126 08:42:59.450839 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" event={"ID":"44aedccc-414c-4d23-9d07-887b803b1b02","Type":"ContainerDied","Data":"83000b56c4dc085d5749411a175113c7e4b829c9d9885148bfcf7692fe2b58f6"} Nov 26 08:42:59 crc kubenswrapper[4909]: I1126 08:42:59.451276 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" event={"ID":"44aedccc-414c-4d23-9d07-887b803b1b02","Type":"ContainerStarted","Data":"bbdaca647bd0effce51b6d49db76ecca758c3ae1c27f6de9f44ea6da2cde7967"} Nov 26 08:43:00 crc kubenswrapper[4909]: I1126 08:43:00.465435 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" event={"ID":"44aedccc-414c-4d23-9d07-887b803b1b02","Type":"ContainerStarted","Data":"b3f8e7c62d41817866e2f4e1ac053223805d420e1c1b0fe65345c7fbdc70f781"} Nov 26 08:43:00 crc kubenswrapper[4909]: I1126 08:43:00.466678 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:43:00 crc kubenswrapper[4909]: I1126 08:43:00.497144 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" podStartSLOduration=3.497122209 podStartE2EDuration="3.497122209s" podCreationTimestamp="2025-11-26 08:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:43:00.489881112 +0000 UTC m=+6152.636092288" watchObservedRunningTime="2025-11-26 08:43:00.497122209 +0000 UTC m=+6152.643333385" Nov 26 08:43:07 crc kubenswrapper[4909]: I1126 08:43:07.301722 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:43:07 crc kubenswrapper[4909]: I1126 08:43:07.302372 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:43:07 crc kubenswrapper[4909]: I1126 08:43:07.302441 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 08:43:07 crc kubenswrapper[4909]: I1126 08:43:07.303577 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffb78bc615b04ce4741527e4e6ccf051dacb1f8a8314365cec3e038b02b7715e"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 08:43:07 crc kubenswrapper[4909]: I1126 08:43:07.303865 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://ffb78bc615b04ce4741527e4e6ccf051dacb1f8a8314365cec3e038b02b7715e" gracePeriod=600 Nov 26 08:43:07 crc kubenswrapper[4909]: I1126 08:43:07.556822 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="ffb78bc615b04ce4741527e4e6ccf051dacb1f8a8314365cec3e038b02b7715e" exitCode=0 Nov 26 08:43:07 crc kubenswrapper[4909]: I1126 08:43:07.556931 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"ffb78bc615b04ce4741527e4e6ccf051dacb1f8a8314365cec3e038b02b7715e"} Nov 26 08:43:07 crc kubenswrapper[4909]: I1126 08:43:07.557183 4909 scope.go:117] "RemoveContainer" containerID="d755dbce77f249a2650f6bbbb00a683b207f5e1fc24bf1491e49824bec0cc0ed" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.255484 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.325432 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7876bb76fc-p8ccd"] Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.325798 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" podUID="c6c10900-1ca3-451b-8f45-4244f9f77701" containerName="dnsmasq-dns" containerID="cri-o://e1017534567219d1b6d14e398b45686f79f41c26aaf6ed56f8c71d40291e8e64" gracePeriod=10 Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.540929 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-df8f9c6bc-25vnv"] Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.543174 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df8f9c6bc-25vnv"] Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.543265 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.586070 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622"} Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.589901 4909 generic.go:334] "Generic (PLEG): container finished" podID="c6c10900-1ca3-451b-8f45-4244f9f77701" containerID="e1017534567219d1b6d14e398b45686f79f41c26aaf6ed56f8c71d40291e8e64" exitCode=0 Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.589924 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" event={"ID":"c6c10900-1ca3-451b-8f45-4244f9f77701","Type":"ContainerDied","Data":"e1017534567219d1b6d14e398b45686f79f41c26aaf6ed56f8c71d40291e8e64"} Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.701213 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-openstack-cell1\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.701426 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-ovsdbserver-sb\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.701462 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-config\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.701507 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-ovsdbserver-nb\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.701814 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-dns-svc\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.701918 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2w45\" (UniqueName: \"kubernetes.io/projected/0fa8db2c-a313-4764-abb5-3741865c6112-kube-api-access-r2w45\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.805366 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-r2w45\" (UniqueName: \"kubernetes.io/projected/0fa8db2c-a313-4764-abb5-3741865c6112-kube-api-access-r2w45\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.805518 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-openstack-cell1\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.805553 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-ovsdbserver-sb\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.805576 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-config\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.805609 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-ovsdbserver-nb\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.805772 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-dns-svc\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.806837 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-dns-svc\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.809248 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-ovsdbserver-sb\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.809680 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-config\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.809910 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-ovsdbserver-nb\") pod 
\"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.809931 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/0fa8db2c-a313-4764-abb5-3741865c6112-openstack-cell1\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.833323 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2w45\" (UniqueName: \"kubernetes.io/projected/0fa8db2c-a313-4764-abb5-3741865c6112-kube-api-access-r2w45\") pod \"dnsmasq-dns-df8f9c6bc-25vnv\" (UID: \"0fa8db2c-a313-4764-abb5-3741865c6112\") " pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:08 crc kubenswrapper[4909]: I1126 08:43:08.870073 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.008967 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.112019 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-dns-svc\") pod \"c6c10900-1ca3-451b-8f45-4244f9f77701\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.112466 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-nb\") pod \"c6c10900-1ca3-451b-8f45-4244f9f77701\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.112507 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfwsd\" (UniqueName: \"kubernetes.io/projected/c6c10900-1ca3-451b-8f45-4244f9f77701-kube-api-access-jfwsd\") pod \"c6c10900-1ca3-451b-8f45-4244f9f77701\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.112575 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-sb\") pod \"c6c10900-1ca3-451b-8f45-4244f9f77701\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.112651 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-config\") pod \"c6c10900-1ca3-451b-8f45-4244f9f77701\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.124800 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c10900-1ca3-451b-8f45-4244f9f77701-kube-api-access-jfwsd" (OuterVolumeSpecName: "kube-api-access-jfwsd") pod "c6c10900-1ca3-451b-8f45-4244f9f77701" (UID: "c6c10900-1ca3-451b-8f45-4244f9f77701"). InnerVolumeSpecName "kube-api-access-jfwsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.211451 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c6c10900-1ca3-451b-8f45-4244f9f77701" (UID: "c6c10900-1ca3-451b-8f45-4244f9f77701"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.214339 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c6c10900-1ca3-451b-8f45-4244f9f77701" (UID: "c6c10900-1ca3-451b-8f45-4244f9f77701"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.214826 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-sb\") pod \"c6c10900-1ca3-451b-8f45-4244f9f77701\" (UID: \"c6c10900-1ca3-451b-8f45-4244f9f77701\") " Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.216234 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c6c10900-1ca3-451b-8f45-4244f9f77701" (UID: "c6c10900-1ca3-451b-8f45-4244f9f77701"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:43:09 crc kubenswrapper[4909]: W1126 08:43:09.216397 4909 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c6c10900-1ca3-451b-8f45-4244f9f77701/volumes/kubernetes.io~configmap/ovsdbserver-sb Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.216410 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c6c10900-1ca3-451b-8f45-4244f9f77701" (UID: "c6c10900-1ca3-451b-8f45-4244f9f77701"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.217106 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.217124 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.217138 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfwsd\" (UniqueName: \"kubernetes.io/projected/c6c10900-1ca3-451b-8f45-4244f9f77701-kube-api-access-jfwsd\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.217148 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.251238 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-config" (OuterVolumeSpecName: "config") pod "c6c10900-1ca3-451b-8f45-4244f9f77701" (UID: "c6c10900-1ca3-451b-8f45-4244f9f77701"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.318982 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c10900-1ca3-451b-8f45-4244f9f77701-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.393502 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df8f9c6bc-25vnv"] Nov 26 08:43:09 crc kubenswrapper[4909]: W1126 08:43:09.394650 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fa8db2c_a313_4764_abb5_3741865c6112.slice/crio-71f26e1699dfbe9512b9126708c3cba500dac5b3197d35f0ab304f121199a17d WatchSource:0}: Error finding container 71f26e1699dfbe9512b9126708c3cba500dac5b3197d35f0ab304f121199a17d: Status 404 returned error can't find the container with id 71f26e1699dfbe9512b9126708c3cba500dac5b3197d35f0ab304f121199a17d Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.608449 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" event={"ID":"0fa8db2c-a313-4764-abb5-3741865c6112","Type":"ContainerStarted","Data":"71f26e1699dfbe9512b9126708c3cba500dac5b3197d35f0ab304f121199a17d"} Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.611357 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" event={"ID":"c6c10900-1ca3-451b-8f45-4244f9f77701","Type":"ContainerDied","Data":"74bea8bf3d44990d72f66d7af2245c21bb3f9ec0eaf129adfc029fb30becfa6e"} Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.611422 4909 scope.go:117] "RemoveContainer" containerID="e1017534567219d1b6d14e398b45686f79f41c26aaf6ed56f8c71d40291e8e64" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.611424 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7876bb76fc-p8ccd" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.653186 4909 scope.go:117] "RemoveContainer" containerID="2888b2b95ce62a48dff91d9ba3d6d072185eae537e43b002c8b5a91743c6ff95" Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.662067 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7876bb76fc-p8ccd"] Nov 26 08:43:09 crc kubenswrapper[4909]: I1126 08:43:09.668574 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7876bb76fc-p8ccd"] Nov 26 08:43:10 crc kubenswrapper[4909]: I1126 08:43:10.511704 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c10900-1ca3-451b-8f45-4244f9f77701" path="/var/lib/kubelet/pods/c6c10900-1ca3-451b-8f45-4244f9f77701/volumes" Nov 26 08:43:10 crc kubenswrapper[4909]: I1126 08:43:10.623679 4909 generic.go:334] "Generic (PLEG): container finished" podID="0fa8db2c-a313-4764-abb5-3741865c6112" containerID="89c60a9c1b946bfc5722fd23b375f98d5034197ff03babbcfd2ad1da6150b5bb" exitCode=0 Nov 26 08:43:10 crc kubenswrapper[4909]: I1126 08:43:10.623729 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" event={"ID":"0fa8db2c-a313-4764-abb5-3741865c6112","Type":"ContainerDied","Data":"89c60a9c1b946bfc5722fd23b375f98d5034197ff03babbcfd2ad1da6150b5bb"} Nov 26 08:43:11 crc kubenswrapper[4909]: I1126 08:43:11.648645 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" event={"ID":"0fa8db2c-a313-4764-abb5-3741865c6112","Type":"ContainerStarted","Data":"3c7e5ddf7dec43079117170843dfc580ec73edf8ec0b3aeaefb9d6f0aabfc8f8"} Nov 26 08:43:11 crc kubenswrapper[4909]: I1126 08:43:11.648824 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:11 crc kubenswrapper[4909]: I1126 08:43:11.678854 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" podStartSLOduration=3.678831424 podStartE2EDuration="3.678831424s" podCreationTimestamp="2025-11-26 08:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 08:43:11.667787413 +0000 UTC m=+6163.813998579" watchObservedRunningTime="2025-11-26 08:43:11.678831424 +0000 UTC m=+6163.825042590" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.587624 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm"] Nov 26 08:43:14 crc kubenswrapper[4909]: E1126 08:43:14.588774 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c10900-1ca3-451b-8f45-4244f9f77701" containerName="dnsmasq-dns" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.588792 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c10900-1ca3-451b-8f45-4244f9f77701" containerName="dnsmasq-dns" Nov 26 08:43:14 crc kubenswrapper[4909]: E1126 08:43:14.588830 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c10900-1ca3-451b-8f45-4244f9f77701" containerName="init" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.588838 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c10900-1ca3-451b-8f45-4244f9f77701" containerName="init" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.589184 4909 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c6c10900-1ca3-451b-8f45-4244f9f77701" containerName="dnsmasq-dns" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.590231 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.609679 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.610084 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.611679 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.622005 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.647196 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm"] Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.678132 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.678640 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.678815 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.678998 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvnnw\" (UniqueName: \"kubernetes.io/projected/4a0598bc-22a7-47ca-af08-34d1f18acf20-kube-api-access-zvnnw\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.679207 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.781006 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.781097 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.781152 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvnnw\" (UniqueName: \"kubernetes.io/projected/4a0598bc-22a7-47ca-af08-34d1f18acf20-kube-api-access-zvnnw\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.781236 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.781391 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.787742 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.788472 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.788712 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.789443 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.804681 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvnnw\" (UniqueName: \"kubernetes.io/projected/4a0598bc-22a7-47ca-af08-34d1f18acf20-kube-api-access-zvnnw\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c979gm\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:14 crc kubenswrapper[4909]: I1126 08:43:14.938975 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:15 crc kubenswrapper[4909]: I1126 08:43:15.624232 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm"] Nov 26 08:43:15 crc kubenswrapper[4909]: I1126 08:43:15.629363 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 08:43:15 crc kubenswrapper[4909]: I1126 08:43:15.699040 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" event={"ID":"4a0598bc-22a7-47ca-af08-34d1f18acf20","Type":"ContainerStarted","Data":"11aad4798da1803c11d5e5f95e203b97ec00b8c0b07fee9261d7d8483938bc40"} Nov 26 08:43:18 crc kubenswrapper[4909]: I1126 08:43:18.872100 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-df8f9c6bc-25vnv" Nov 26 08:43:18 crc kubenswrapper[4909]: I1126 08:43:18.959502 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f77b9c99-l2nk7"] Nov 26 08:43:18 crc kubenswrapper[4909]: I1126 08:43:18.959803 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" podUID="44aedccc-414c-4d23-9d07-887b803b1b02" containerName="dnsmasq-dns" containerID="cri-o://b3f8e7c62d41817866e2f4e1ac053223805d420e1c1b0fe65345c7fbdc70f781" gracePeriod=10 Nov 26 08:43:19 crc kubenswrapper[4909]: I1126 08:43:19.748491 4909 generic.go:334] "Generic (PLEG): container finished" podID="44aedccc-414c-4d23-9d07-887b803b1b02" containerID="b3f8e7c62d41817866e2f4e1ac053223805d420e1c1b0fe65345c7fbdc70f781" exitCode=0 Nov 26 08:43:19 crc kubenswrapper[4909]: I1126 08:43:19.748568 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" event={"ID":"44aedccc-414c-4d23-9d07-887b803b1b02","Type":"ContainerDied","Data":"b3f8e7c62d41817866e2f4e1ac053223805d420e1c1b0fe65345c7fbdc70f781"} Nov 26 08:43:23 crc kubenswrapper[4909]: I1126 08:43:23.254135 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" 
podUID="44aedccc-414c-4d23-9d07-887b803b1b02" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.161:5353: connect: connection refused" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.653127 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.817566 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" event={"ID":"4a0598bc-22a7-47ca-af08-34d1f18acf20","Type":"ContainerStarted","Data":"81bc942f2e8990865d8cc5c178e216c310b88194c93a9b8d6c62ef5a3baf8ce6"} Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.819146 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" event={"ID":"44aedccc-414c-4d23-9d07-887b803b1b02","Type":"ContainerDied","Data":"bbdaca647bd0effce51b6d49db76ecca758c3ae1c27f6de9f44ea6da2cde7967"} Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.819190 4909 scope.go:117] "RemoveContainer" containerID="b3f8e7c62d41817866e2f4e1ac053223805d420e1c1b0fe65345c7fbdc70f781" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.819229 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f77b9c99-l2nk7" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.828258 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-sb\") pod \"44aedccc-414c-4d23-9d07-887b803b1b02\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.828445 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-dns-svc\") pod \"44aedccc-414c-4d23-9d07-887b803b1b02\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.828537 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-openstack-cell1\") pod \"44aedccc-414c-4d23-9d07-887b803b1b02\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.828565 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-nb\") pod \"44aedccc-414c-4d23-9d07-887b803b1b02\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.828768 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-config\") pod \"44aedccc-414c-4d23-9d07-887b803b1b02\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.828802 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djhrf\" (UniqueName: \"kubernetes.io/projected/44aedccc-414c-4d23-9d07-887b803b1b02-kube-api-access-djhrf\") pod \"44aedccc-414c-4d23-9d07-887b803b1b02\" (UID: \"44aedccc-414c-4d23-9d07-887b803b1b02\") " Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.833762 
4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44aedccc-414c-4d23-9d07-887b803b1b02-kube-api-access-djhrf" (OuterVolumeSpecName: "kube-api-access-djhrf") pod "44aedccc-414c-4d23-9d07-887b803b1b02" (UID: "44aedccc-414c-4d23-9d07-887b803b1b02"). InnerVolumeSpecName "kube-api-access-djhrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.845452 4909 scope.go:117] "RemoveContainer" containerID="83000b56c4dc085d5749411a175113c7e4b829c9d9885148bfcf7692fe2b58f6" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.852076 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" podStartSLOduration=2.155899637 podStartE2EDuration="11.852051152s" podCreationTimestamp="2025-11-26 08:43:14 +0000 UTC" firstStartedPulling="2025-11-26 08:43:15.62874007 +0000 UTC m=+6167.774951246" lastFinishedPulling="2025-11-26 08:43:25.324891575 +0000 UTC m=+6177.471102761" observedRunningTime="2025-11-26 08:43:25.84315467 +0000 UTC m=+6177.989365836" watchObservedRunningTime="2025-11-26 08:43:25.852051152 +0000 UTC m=+6177.998262328" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.894452 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44aedccc-414c-4d23-9d07-887b803b1b02" (UID: "44aedccc-414c-4d23-9d07-887b803b1b02"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.894457 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44aedccc-414c-4d23-9d07-887b803b1b02" (UID: "44aedccc-414c-4d23-9d07-887b803b1b02"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.896670 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "44aedccc-414c-4d23-9d07-887b803b1b02" (UID: "44aedccc-414c-4d23-9d07-887b803b1b02"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.899696 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44aedccc-414c-4d23-9d07-887b803b1b02" (UID: "44aedccc-414c-4d23-9d07-887b803b1b02"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.924547 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-config" (OuterVolumeSpecName: "config") pod "44aedccc-414c-4d23-9d07-887b803b1b02" (UID: "44aedccc-414c-4d23-9d07-887b803b1b02"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.932380 4909 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.932498 4909 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.932515 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.932528 4909 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-config\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.932561 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djhrf\" (UniqueName: \"kubernetes.io/projected/44aedccc-414c-4d23-9d07-887b803b1b02-kube-api-access-djhrf\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:25 crc kubenswrapper[4909]: I1126 08:43:25.932579 4909 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44aedccc-414c-4d23-9d07-887b803b1b02-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:26 crc kubenswrapper[4909]: I1126 08:43:26.174613 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f77b9c99-l2nk7"] Nov 26 08:43:26 crc kubenswrapper[4909]: I1126 08:43:26.182902 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65f77b9c99-l2nk7"] Nov 26 08:43:26 crc kubenswrapper[4909]: E1126 08:43:26.255681 4909 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44aedccc_414c_4d23_9d07_887b803b1b02.slice/crio-bbdaca647bd0effce51b6d49db76ecca758c3ae1c27f6de9f44ea6da2cde7967\": RecentStats: unable to find data in memory cache]" Nov 26 08:43:26 crc kubenswrapper[4909]: I1126 08:43:26.511363 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44aedccc-414c-4d23-9d07-887b803b1b02" path="/var/lib/kubelet/pods/44aedccc-414c-4d23-9d07-887b803b1b02/volumes" Nov 26 08:43:39 crc kubenswrapper[4909]: I1126 08:43:39.973883 4909 generic.go:334] "Generic (PLEG): container finished" podID="4a0598bc-22a7-47ca-af08-34d1f18acf20" containerID="81bc942f2e8990865d8cc5c178e216c310b88194c93a9b8d6c62ef5a3baf8ce6" exitCode=0 Nov 26 08:43:39 crc kubenswrapper[4909]: I1126 08:43:39.974500 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" event={"ID":"4a0598bc-22a7-47ca-af08-34d1f18acf20","Type":"ContainerDied","Data":"81bc942f2e8990865d8cc5c178e216c310b88194c93a9b8d6c62ef5a3baf8ce6"} Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.681135 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.792272 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ceph\") pod \"4a0598bc-22a7-47ca-af08-34d1f18acf20\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.792359 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-inventory\") pod \"4a0598bc-22a7-47ca-af08-34d1f18acf20\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.792407 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ssh-key\") pod \"4a0598bc-22a7-47ca-af08-34d1f18acf20\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.792628 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-pre-adoption-validation-combined-ca-bundle\") pod \"4a0598bc-22a7-47ca-af08-34d1f18acf20\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.792670 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvnnw\" (UniqueName: \"kubernetes.io/projected/4a0598bc-22a7-47ca-af08-34d1f18acf20-kube-api-access-zvnnw\") pod \"4a0598bc-22a7-47ca-af08-34d1f18acf20\" (UID: \"4a0598bc-22a7-47ca-af08-34d1f18acf20\") " Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.798755 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "4a0598bc-22a7-47ca-af08-34d1f18acf20" (UID: "4a0598bc-22a7-47ca-af08-34d1f18acf20"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.798808 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ceph" (OuterVolumeSpecName: "ceph") pod "4a0598bc-22a7-47ca-af08-34d1f18acf20" (UID: "4a0598bc-22a7-47ca-af08-34d1f18acf20"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.798850 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a0598bc-22a7-47ca-af08-34d1f18acf20-kube-api-access-zvnnw" (OuterVolumeSpecName: "kube-api-access-zvnnw") pod "4a0598bc-22a7-47ca-af08-34d1f18acf20" (UID: "4a0598bc-22a7-47ca-af08-34d1f18acf20"). InnerVolumeSpecName "kube-api-access-zvnnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.832861 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-inventory" (OuterVolumeSpecName: "inventory") pod "4a0598bc-22a7-47ca-af08-34d1f18acf20" (UID: "4a0598bc-22a7-47ca-af08-34d1f18acf20"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.835190 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4a0598bc-22a7-47ca-af08-34d1f18acf20" (UID: "4a0598bc-22a7-47ca-af08-34d1f18acf20"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.894985 4909 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.895021 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvnnw\" (UniqueName: \"kubernetes.io/projected/4a0598bc-22a7-47ca-af08-34d1f18acf20-kube-api-access-zvnnw\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.895036 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.895048 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:41 crc kubenswrapper[4909]: I1126 08:43:41.895059 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a0598bc-22a7-47ca-af08-34d1f18acf20-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 08:43:42 crc kubenswrapper[4909]: I1126 08:43:42.005128 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" event={"ID":"4a0598bc-22a7-47ca-af08-34d1f18acf20","Type":"ContainerDied","Data":"11aad4798da1803c11d5e5f95e203b97ec00b8c0b07fee9261d7d8483938bc40"} Nov 26 08:43:42 crc kubenswrapper[4909]: I1126 08:43:42.005183 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11aad4798da1803c11d5e5f95e203b97ec00b8c0b07fee9261d7d8483938bc40" Nov 26 08:43:42 crc kubenswrapper[4909]: I1126 08:43:42.005310 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c979gm" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.299190 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq"] Nov 26 08:43:52 crc kubenswrapper[4909]: E1126 08:43:52.300315 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0598bc-22a7-47ca-af08-34d1f18acf20" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.300333 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0598bc-22a7-47ca-af08-34d1f18acf20" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 26 08:43:52 crc kubenswrapper[4909]: E1126 08:43:52.300366 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44aedccc-414c-4d23-9d07-887b803b1b02" containerName="dnsmasq-dns" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.300374 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="44aedccc-414c-4d23-9d07-887b803b1b02" containerName="dnsmasq-dns" Nov 26 08:43:52 crc kubenswrapper[4909]: E1126 08:43:52.300395 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44aedccc-414c-4d23-9d07-887b803b1b02" containerName="init" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.300404 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="44aedccc-414c-4d23-9d07-887b803b1b02" containerName="init" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.300664 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a0598bc-22a7-47ca-af08-34d1f18acf20" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.300713 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="44aedccc-414c-4d23-9d07-887b803b1b02" containerName="dnsmasq-dns" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.301772 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.304577 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.304861 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.305730 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.305732 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.311568 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq"] Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.325634 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.325703 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.326005 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.326077 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76z9w\" (UniqueName: \"kubernetes.io/projected/e0f2810b-5183-4439-88f2-7c47010a5aa9-kube-api-access-76z9w\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.326118 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.427823 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ssh-key\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.427900 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76z9w\" (UniqueName: \"kubernetes.io/projected/e0f2810b-5183-4439-88f2-7c47010a5aa9-kube-api-access-76z9w\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.427934 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.427992 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.428014 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.434451 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.438227 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.444850 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.450808 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76z9w\" (UniqueName: \"kubernetes.io/projected/e0f2810b-5183-4439-88f2-7c47010a5aa9-kube-api-access-76z9w\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.452129 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:52 crc kubenswrapper[4909]: I1126 08:43:52.624917 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:43:53 crc kubenswrapper[4909]: I1126 08:43:53.293161 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq"] Nov 26 08:43:54 crc kubenswrapper[4909]: I1126 08:43:54.152284 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" event={"ID":"e0f2810b-5183-4439-88f2-7c47010a5aa9","Type":"ContainerStarted","Data":"a427b6bd81fec3680d9f3438d3039e72c92be2ba3636a81f0107a99bb35f3a87"} Nov 26 08:43:55 crc kubenswrapper[4909]: I1126 08:43:55.176380 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" event={"ID":"e0f2810b-5183-4439-88f2-7c47010a5aa9","Type":"ContainerStarted","Data":"d030d9b92413a5c9af265aeda2917e5b7fa55d8744223b98fc160eab9976b311"} Nov 26 08:43:55 crc kubenswrapper[4909]: I1126 08:43:55.210085 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" podStartSLOduration=2.631381538 podStartE2EDuration="3.210063909s" podCreationTimestamp="2025-11-26 08:43:52 +0000 UTC" firstStartedPulling="2025-11-26 08:43:53.296845342 +0000 UTC m=+6205.443056508" lastFinishedPulling="2025-11-26 08:43:53.875527713 +0000 UTC m=+6206.021738879" observedRunningTime="2025-11-26 08:43:55.200897359 +0000 UTC m=+6207.347108535" watchObservedRunningTime="2025-11-26 08:43:55.210063909 +0000 UTC m=+6207.356275085" Nov 26 08:44:09 crc kubenswrapper[4909]: I1126 08:44:09.356020 4909 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.885906549s: [/var/lib/containers/storage/overlay/d34ce34b432d3b0e8f8f9b325984a5b53772ed7e5fd6947f256c70ff8a301f82/diff /var/log/pods/openstack_openstack-cell1-galera-0_0030125a-9381-4664-9a8f-bcc4a9a812e7/galera/0.log]; will not log again for this container unless duration exceeds 2s Nov 26 08:44:24 crc kubenswrapper[4909]: I1126 08:44:24.050232 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-5hhlc"] Nov 26 08:44:24 crc kubenswrapper[4909]: I1126 08:44:24.066321 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-5hhlc"] Nov 26 08:44:24 crc kubenswrapper[4909]: I1126 08:44:24.514559 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e0cc19d-c7f8-467c-ad24-8217faef3b6f" path="/var/lib/kubelet/pods/7e0cc19d-c7f8-467c-ad24-8217faef3b6f/volumes" Nov 26 08:44:35 crc kubenswrapper[4909]: I1126 08:44:35.031182 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-d8df-account-create-vbm7m"] Nov 26 08:44:35 crc kubenswrapper[4909]: I1126 08:44:35.048391 
4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-d8df-account-create-vbm7m"] Nov 26 08:44:36 crc kubenswrapper[4909]: I1126 08:44:36.516002 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2987232-00ac-4875-9064-2702269119f7" path="/var/lib/kubelet/pods/b2987232-00ac-4875-9064-2702269119f7/volumes" Nov 26 08:44:41 crc kubenswrapper[4909]: I1126 08:44:41.040769 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-dtc4m"] Nov 26 08:44:41 crc kubenswrapper[4909]: I1126 08:44:41.058445 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-dtc4m"] Nov 26 08:44:42 crc kubenswrapper[4909]: I1126 08:44:42.517782 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96be849b-680f-4f74-9855-3194ef1b3969" path="/var/lib/kubelet/pods/96be849b-680f-4f74-9855-3194ef1b3969/volumes" Nov 26 08:44:44 crc kubenswrapper[4909]: I1126 08:44:44.661830 4909 scope.go:117] "RemoveContainer" containerID="9fc7548e3a2d3ba71a5b6f31b0cdb4e60a3aa82b70ede327a986289af30a2887" Nov 26 08:44:44 crc kubenswrapper[4909]: I1126 08:44:44.703118 4909 scope.go:117] "RemoveContainer" containerID="28b163d0e748eabfffc0e4ca848bfe51a2c3b38e6ade0ac1e77fbdf8fa34570b" Nov 26 08:44:44 crc kubenswrapper[4909]: I1126 08:44:44.749392 4909 scope.go:117] "RemoveContainer" containerID="65209f94e941abfd4b3f3daac56b2b6aca783e4301670e42d1d7336549a52d9c" Nov 26 08:44:52 crc kubenswrapper[4909]: I1126 08:44:52.048431 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-5358-account-create-rljm2"] Nov 26 08:44:52 crc kubenswrapper[4909]: I1126 08:44:52.054354 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-5358-account-create-rljm2"] Nov 26 08:44:52 crc kubenswrapper[4909]: I1126 08:44:52.514933 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d12d5303-4df7-444c-a843-699aacd819b8" path="/var/lib/kubelet/pods/d12d5303-4df7-444c-a843-699aacd819b8/volumes" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.169376 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l"] Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.172407 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.175052 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.175315 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.183485 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l"] Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.369844 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76a8bc06-8773-4a26-a767-7f4dbc4a6643-secret-volume\") pod \"collect-profiles-29402445-d498l\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.370212 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a8bc06-8773-4a26-a767-7f4dbc4a6643-config-volume\") pod \"collect-profiles-29402445-d498l\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.370321 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52c72\" (UniqueName: \"kubernetes.io/projected/76a8bc06-8773-4a26-a767-7f4dbc4a6643-kube-api-access-52c72\") pod \"collect-profiles-29402445-d498l\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.473088 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76a8bc06-8773-4a26-a767-7f4dbc4a6643-secret-volume\") pod \"collect-profiles-29402445-d498l\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.473226 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a8bc06-8773-4a26-a767-7f4dbc4a6643-config-volume\") pod \"collect-profiles-29402445-d498l\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.473272 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52c72\" (UniqueName: \"kubernetes.io/projected/76a8bc06-8773-4a26-a767-7f4dbc4a6643-kube-api-access-52c72\") pod \"collect-profiles-29402445-d498l\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.475154 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a8bc06-8773-4a26-a767-7f4dbc4a6643-config-volume\") pod 
\"collect-profiles-29402445-d498l\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.484645 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76a8bc06-8773-4a26-a767-7f4dbc4a6643-secret-volume\") pod \"collect-profiles-29402445-d498l\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.500570 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52c72\" (UniqueName: \"kubernetes.io/projected/76a8bc06-8773-4a26-a767-7f4dbc4a6643-kube-api-access-52c72\") pod \"collect-profiles-29402445-d498l\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:00 crc kubenswrapper[4909]: I1126 08:45:00.533465 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:01 crc kubenswrapper[4909]: I1126 08:45:01.036220 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l"] Nov 26 08:45:01 crc kubenswrapper[4909]: W1126 08:45:01.037982 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76a8bc06_8773_4a26_a767_7f4dbc4a6643.slice/crio-056afc5ca07ce5e1a6f3df23fd6aadcdbc1d04d07ebc7325b2bed3cee0e71a8e WatchSource:0}: Error finding container 056afc5ca07ce5e1a6f3df23fd6aadcdbc1d04d07ebc7325b2bed3cee0e71a8e: Status 404 returned error can't find the container with id 056afc5ca07ce5e1a6f3df23fd6aadcdbc1d04d07ebc7325b2bed3cee0e71a8e Nov 26 08:45:02 crc kubenswrapper[4909]: I1126 08:45:02.009703 4909 generic.go:334] "Generic (PLEG): container finished" podID="76a8bc06-8773-4a26-a767-7f4dbc4a6643" containerID="45de15107549009e775b1325ad9fe7d5522563e2ceb663804c45d9eeec53674b" exitCode=0 Nov 26 08:45:02 crc kubenswrapper[4909]: I1126 08:45:02.009831 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" event={"ID":"76a8bc06-8773-4a26-a767-7f4dbc4a6643","Type":"ContainerDied","Data":"45de15107549009e775b1325ad9fe7d5522563e2ceb663804c45d9eeec53674b"} Nov 26 08:45:02 crc kubenswrapper[4909]: I1126 08:45:02.010103 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" event={"ID":"76a8bc06-8773-4a26-a767-7f4dbc4a6643","Type":"ContainerStarted","Data":"056afc5ca07ce5e1a6f3df23fd6aadcdbc1d04d07ebc7325b2bed3cee0e71a8e"} Nov 26 08:45:03 crc kubenswrapper[4909]: I1126 08:45:03.429515 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:03 crc kubenswrapper[4909]: I1126 08:45:03.544964 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76a8bc06-8773-4a26-a767-7f4dbc4a6643-secret-volume\") pod \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " Nov 26 08:45:03 crc kubenswrapper[4909]: I1126 08:45:03.545145 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a8bc06-8773-4a26-a767-7f4dbc4a6643-config-volume\") pod \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " Nov 26 08:45:03 crc kubenswrapper[4909]: I1126 08:45:03.545228 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52c72\" (UniqueName: \"kubernetes.io/projected/76a8bc06-8773-4a26-a767-7f4dbc4a6643-kube-api-access-52c72\") pod \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\" (UID: \"76a8bc06-8773-4a26-a767-7f4dbc4a6643\") " Nov 26 08:45:03 crc kubenswrapper[4909]: I1126 08:45:03.545838 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a8bc06-8773-4a26-a767-7f4dbc4a6643-config-volume" (OuterVolumeSpecName: "config-volume") pod "76a8bc06-8773-4a26-a767-7f4dbc4a6643" (UID: "76a8bc06-8773-4a26-a767-7f4dbc4a6643"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 08:45:03 crc kubenswrapper[4909]: I1126 08:45:03.546028 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a8bc06-8773-4a26-a767-7f4dbc4a6643-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 08:45:03 crc kubenswrapper[4909]: I1126 08:45:03.551438 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a8bc06-8773-4a26-a767-7f4dbc4a6643-kube-api-access-52c72" (OuterVolumeSpecName: "kube-api-access-52c72") pod "76a8bc06-8773-4a26-a767-7f4dbc4a6643" (UID: "76a8bc06-8773-4a26-a767-7f4dbc4a6643"). InnerVolumeSpecName "kube-api-access-52c72". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:45:03 crc kubenswrapper[4909]: I1126 08:45:03.565821 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a8bc06-8773-4a26-a767-7f4dbc4a6643-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "76a8bc06-8773-4a26-a767-7f4dbc4a6643" (UID: "76a8bc06-8773-4a26-a767-7f4dbc4a6643"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:45:03 crc kubenswrapper[4909]: I1126 08:45:03.648187 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52c72\" (UniqueName: \"kubernetes.io/projected/76a8bc06-8773-4a26-a767-7f4dbc4a6643-kube-api-access-52c72\") on node \"crc\" DevicePath \"\"" Nov 26 08:45:03 crc kubenswrapper[4909]: I1126 08:45:03.648233 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76a8bc06-8773-4a26-a767-7f4dbc4a6643-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 08:45:04 crc kubenswrapper[4909]: I1126 08:45:04.033785 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" event={"ID":"76a8bc06-8773-4a26-a767-7f4dbc4a6643","Type":"ContainerDied","Data":"056afc5ca07ce5e1a6f3df23fd6aadcdbc1d04d07ebc7325b2bed3cee0e71a8e"} Nov 26 08:45:04 crc kubenswrapper[4909]: I1126 08:45:04.034444 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="056afc5ca07ce5e1a6f3df23fd6aadcdbc1d04d07ebc7325b2bed3cee0e71a8e" Nov 26 08:45:04 crc kubenswrapper[4909]: I1126 08:45:04.034079 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l" Nov 26 08:45:04 crc kubenswrapper[4909]: I1126 08:45:04.497527 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh"] Nov 26 08:45:04 crc kubenswrapper[4909]: I1126 08:45:04.511634 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402400-6nxvh"] Nov 26 08:45:06 crc kubenswrapper[4909]: I1126 08:45:06.525479 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c1c2a9-3b99-416e-be50-db485df71b18" path="/var/lib/kubelet/pods/26c1c2a9-3b99-416e-be50-db485df71b18/volumes" Nov 26 08:45:07 crc kubenswrapper[4909]: I1126 08:45:07.301505 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:45:07 crc kubenswrapper[4909]: I1126 08:45:07.301620 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:45:37 crc kubenswrapper[4909]: I1126 08:45:37.301543 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:45:37 crc kubenswrapper[4909]: I1126 08:45:37.302152 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:45:38 crc kubenswrapper[4909]: 
I1126 08:45:38.083076 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-7rb9k"] Nov 26 08:45:38 crc kubenswrapper[4909]: I1126 08:45:38.105267 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-7rb9k"] Nov 26 08:45:38 crc kubenswrapper[4909]: I1126 08:45:38.535712 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca41f09-57ae-4c80-b0eb-bccd0c02a141" path="/var/lib/kubelet/pods/9ca41f09-57ae-4c80-b0eb-bccd0c02a141/volumes" Nov 26 08:45:44 crc kubenswrapper[4909]: I1126 08:45:44.901670 4909 scope.go:117] "RemoveContainer" containerID="7671d9e1b14def9c86dee2661f26b1aac377fc0ea53aff520ec83103d04a94f3" Nov 26 08:45:44 crc kubenswrapper[4909]: I1126 08:45:44.945386 4909 scope.go:117] "RemoveContainer" containerID="3108bf2120b949b6c88fc56cac72838ec3c1562cb9b0273d10751296ec6f439f" Nov 26 08:45:45 crc kubenswrapper[4909]: I1126 08:45:45.017024 4909 scope.go:117] "RemoveContainer" containerID="c254ad81c035c1a22659c89c7aca9b511637ccee7728d3cdabc5f6eb141ddc4f" Nov 26 08:45:45 crc kubenswrapper[4909]: I1126 08:45:45.074950 4909 scope.go:117] "RemoveContainer" containerID="4fe18fa3f527cffb7a86c2dd69516ea7fb496b60b13e1f884c03dfce26456a2a" Nov 26 08:46:07 crc kubenswrapper[4909]: I1126 08:46:07.300862 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:46:07 crc kubenswrapper[4909]: I1126 08:46:07.301807 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:46:07 crc kubenswrapper[4909]: I1126 08:46:07.301897 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 08:46:07 crc kubenswrapper[4909]: I1126 08:46:07.303325 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 08:46:07 crc kubenswrapper[4909]: I1126 08:46:07.303464 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" gracePeriod=600 Nov 26 08:46:07 crc kubenswrapper[4909]: E1126 08:46:07.429775 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:46:07 crc kubenswrapper[4909]: 
I1126 08:46:07.771189 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" exitCode=0 Nov 26 08:46:07 crc kubenswrapper[4909]: I1126 08:46:07.771264 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622"} Nov 26 08:46:07 crc kubenswrapper[4909]: I1126 08:46:07.771607 4909 scope.go:117] "RemoveContainer" containerID="ffb78bc615b04ce4741527e4e6ccf051dacb1f8a8314365cec3e038b02b7715e" Nov 26 08:46:07 crc kubenswrapper[4909]: I1126 08:46:07.772470 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:46:07 crc kubenswrapper[4909]: E1126 08:46:07.772968 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:46:21 crc kubenswrapper[4909]: I1126 08:46:21.499907 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:46:21 crc kubenswrapper[4909]: E1126 08:46:21.500734 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.352324 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h46qk"] Nov 26 08:46:29 crc kubenswrapper[4909]: E1126 08:46:29.353928 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a8bc06-8773-4a26-a767-7f4dbc4a6643" containerName="collect-profiles" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.353946 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a8bc06-8773-4a26-a767-7f4dbc4a6643" containerName="collect-profiles" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.354254 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a8bc06-8773-4a26-a767-7f4dbc4a6643" containerName="collect-profiles" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.360964 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.379526 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h46qk"] Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.431118 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-utilities\") pod \"community-operators-h46qk\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.431199 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgsb5\" (UniqueName: \"kubernetes.io/projected/65da4f0e-6db6-4208-be70-247559c342af-kube-api-access-lgsb5\") pod \"community-operators-h46qk\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.431488 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-catalog-content\") pod \"community-operators-h46qk\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.534231 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-catalog-content\") pod \"community-operators-h46qk\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.534319 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-utilities\") pod \"community-operators-h46qk\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.534354 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgsb5\" (UniqueName: \"kubernetes.io/projected/65da4f0e-6db6-4208-be70-247559c342af-kube-api-access-lgsb5\") pod \"community-operators-h46qk\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.536355 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-catalog-content\") pod \"community-operators-h46qk\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.536936 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-utilities\") pod \"community-operators-h46qk\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.558648 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lgsb5\" (UniqueName: \"kubernetes.io/projected/65da4f0e-6db6-4208-be70-247559c342af-kube-api-access-lgsb5\") pod \"community-operators-h46qk\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:29 crc kubenswrapper[4909]: I1126 08:46:29.693189 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:30 crc kubenswrapper[4909]: I1126 08:46:30.252742 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h46qk"] Nov 26 08:46:31 crc kubenswrapper[4909]: I1126 08:46:31.079380 4909 generic.go:334] "Generic (PLEG): container finished" podID="65da4f0e-6db6-4208-be70-247559c342af" containerID="33ecf9638980652532b532ab61a5b7c8f9f88a2c245b7b2f92b89db3017da814" exitCode=0 Nov 26 08:46:31 crc kubenswrapper[4909]: I1126 08:46:31.079514 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h46qk" event={"ID":"65da4f0e-6db6-4208-be70-247559c342af","Type":"ContainerDied","Data":"33ecf9638980652532b532ab61a5b7c8f9f88a2c245b7b2f92b89db3017da814"} Nov 26 08:46:31 crc kubenswrapper[4909]: I1126 08:46:31.079887 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h46qk" event={"ID":"65da4f0e-6db6-4208-be70-247559c342af","Type":"ContainerStarted","Data":"4a651fe73d9de2182f3eb50fbc34c8c451daa99aadcf3a2deb0bc8af09c670d2"} Nov 26 08:46:32 crc kubenswrapper[4909]: I1126 08:46:32.499841 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:46:32 crc kubenswrapper[4909]: E1126 08:46:32.500770 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:46:33 crc kubenswrapper[4909]: I1126 08:46:33.126176 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h46qk" event={"ID":"65da4f0e-6db6-4208-be70-247559c342af","Type":"ContainerStarted","Data":"b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2"} Nov 26 08:46:34 crc kubenswrapper[4909]: I1126 08:46:34.144011 4909 generic.go:334] "Generic (PLEG): container finished" podID="65da4f0e-6db6-4208-be70-247559c342af" containerID="b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2" exitCode=0 Nov 26 08:46:34 crc kubenswrapper[4909]: I1126 08:46:34.144061 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h46qk" event={"ID":"65da4f0e-6db6-4208-be70-247559c342af","Type":"ContainerDied","Data":"b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2"} Nov 26 08:46:35 crc kubenswrapper[4909]: I1126 08:46:35.161859 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h46qk" event={"ID":"65da4f0e-6db6-4208-be70-247559c342af","Type":"ContainerStarted","Data":"127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13"} Nov 26 08:46:35 crc kubenswrapper[4909]: I1126 08:46:35.193969 4909 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h46qk" podStartSLOduration=2.729256587 podStartE2EDuration="6.193950348s" podCreationTimestamp="2025-11-26 08:46:29 +0000 UTC" firstStartedPulling="2025-11-26 08:46:31.082376609 +0000 UTC m=+6363.228587785" lastFinishedPulling="2025-11-26 08:46:34.54707037 +0000 UTC m=+6366.693281546" observedRunningTime="2025-11-26 08:46:35.187676268 +0000 UTC m=+6367.333887434" watchObservedRunningTime="2025-11-26 08:46:35.193950348 +0000 UTC m=+6367.340161514" Nov 26 08:46:36 crc kubenswrapper[4909]: I1126 08:46:36.709903 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lz829"] Nov 26 08:46:36 crc kubenswrapper[4909]: I1126 08:46:36.712609 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:36 crc kubenswrapper[4909]: I1126 08:46:36.751719 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz829"] Nov 26 08:46:36 crc kubenswrapper[4909]: I1126 08:46:36.823232 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8kv\" (UniqueName: \"kubernetes.io/projected/07b58d2d-0abb-4c2a-af4f-671e9f123881-kube-api-access-kw8kv\") pod \"redhat-marketplace-lz829\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:36 crc kubenswrapper[4909]: I1126 08:46:36.823327 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-catalog-content\") pod \"redhat-marketplace-lz829\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:36 crc kubenswrapper[4909]: I1126 08:46:36.823461 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-utilities\") pod \"redhat-marketplace-lz829\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:36 crc kubenswrapper[4909]: I1126 08:46:36.925368 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-catalog-content\") pod \"redhat-marketplace-lz829\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:36 crc kubenswrapper[4909]: I1126 08:46:36.925442 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-utilities\") pod \"redhat-marketplace-lz829\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:36 crc kubenswrapper[4909]: I1126 08:46:36.925561 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw8kv\" (UniqueName: \"kubernetes.io/projected/07b58d2d-0abb-4c2a-af4f-671e9f123881-kube-api-access-kw8kv\") pod \"redhat-marketplace-lz829\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:36 crc 
kubenswrapper[4909]: I1126 08:46:36.926056 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-catalog-content\") pod \"redhat-marketplace-lz829\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:36 crc kubenswrapper[4909]: I1126 08:46:36.926056 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-utilities\") pod \"redhat-marketplace-lz829\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:36 crc kubenswrapper[4909]: I1126 08:46:36.943274 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw8kv\" (UniqueName: \"kubernetes.io/projected/07b58d2d-0abb-4c2a-af4f-671e9f123881-kube-api-access-kw8kv\") pod \"redhat-marketplace-lz829\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:37 crc kubenswrapper[4909]: I1126 08:46:37.049091 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:37 crc kubenswrapper[4909]: I1126 08:46:37.582100 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz829"] Nov 26 08:46:38 crc kubenswrapper[4909]: I1126 08:46:38.193437 4909 generic.go:334] "Generic (PLEG): container finished" podID="07b58d2d-0abb-4c2a-af4f-671e9f123881" containerID="547714d95b177e224ba76215ebb8554b78e113b7eaa47479cdb12eb979ff0f01" exitCode=0 Nov 26 08:46:38 crc kubenswrapper[4909]: I1126 08:46:38.193525 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz829" event={"ID":"07b58d2d-0abb-4c2a-af4f-671e9f123881","Type":"ContainerDied","Data":"547714d95b177e224ba76215ebb8554b78e113b7eaa47479cdb12eb979ff0f01"} Nov 26 08:46:38 crc kubenswrapper[4909]: I1126 08:46:38.193712 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz829" event={"ID":"07b58d2d-0abb-4c2a-af4f-671e9f123881","Type":"ContainerStarted","Data":"7c6ae31782d38cd5141d0e1f5c49345039434dba0c499ea239526b1f51abfc9a"} Nov 26 08:46:39 crc kubenswrapper[4909]: I1126 08:46:39.203751 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz829" event={"ID":"07b58d2d-0abb-4c2a-af4f-671e9f123881","Type":"ContainerStarted","Data":"e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68"} Nov 26 08:46:39 crc kubenswrapper[4909]: I1126 08:46:39.694376 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:39 crc kubenswrapper[4909]: I1126 08:46:39.694888 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:39 crc kubenswrapper[4909]: I1126 08:46:39.777328 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:40 crc kubenswrapper[4909]: I1126 08:46:40.218645 4909 generic.go:334] "Generic (PLEG): container finished" podID="07b58d2d-0abb-4c2a-af4f-671e9f123881" containerID="e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68" exitCode=0 
Nov 26 08:46:40 crc kubenswrapper[4909]: I1126 08:46:40.220517 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz829" event={"ID":"07b58d2d-0abb-4c2a-af4f-671e9f123881","Type":"ContainerDied","Data":"e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68"} Nov 26 08:46:40 crc kubenswrapper[4909]: I1126 08:46:40.286609 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:41 crc kubenswrapper[4909]: I1126 08:46:41.234736 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz829" event={"ID":"07b58d2d-0abb-4c2a-af4f-671e9f123881","Type":"ContainerStarted","Data":"6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011"} Nov 26 08:46:41 crc kubenswrapper[4909]: I1126 08:46:41.270471 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lz829" podStartSLOduration=2.810780883 podStartE2EDuration="5.270447392s" podCreationTimestamp="2025-11-26 08:46:36 +0000 UTC" firstStartedPulling="2025-11-26 08:46:38.195738552 +0000 UTC m=+6370.341949728" lastFinishedPulling="2025-11-26 08:46:40.655405071 +0000 UTC m=+6372.801616237" observedRunningTime="2025-11-26 08:46:41.257811387 +0000 UTC m=+6373.404022573" watchObservedRunningTime="2025-11-26 08:46:41.270447392 +0000 UTC m=+6373.416658568" Nov 26 08:46:42 crc kubenswrapper[4909]: I1126 08:46:42.098327 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h46qk"] Nov 26 08:46:43 crc kubenswrapper[4909]: I1126 08:46:43.259027 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h46qk" podUID="65da4f0e-6db6-4208-be70-247559c342af" containerName="registry-server" containerID="cri-o://127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13" gracePeriod=2 Nov 26 08:46:43 crc kubenswrapper[4909]: I1126 08:46:43.500259 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:46:43 crc kubenswrapper[4909]: E1126 08:46:43.500833 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:46:43 crc kubenswrapper[4909]: I1126 08:46:43.878152 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.043196 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgsb5\" (UniqueName: \"kubernetes.io/projected/65da4f0e-6db6-4208-be70-247559c342af-kube-api-access-lgsb5\") pod \"65da4f0e-6db6-4208-be70-247559c342af\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.043294 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-catalog-content\") pod \"65da4f0e-6db6-4208-be70-247559c342af\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.043566 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-utilities\") pod \"65da4f0e-6db6-4208-be70-247559c342af\" (UID: \"65da4f0e-6db6-4208-be70-247559c342af\") " Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.045803 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-utilities" (OuterVolumeSpecName: "utilities") pod "65da4f0e-6db6-4208-be70-247559c342af" (UID: "65da4f0e-6db6-4208-be70-247559c342af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.049660 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65da4f0e-6db6-4208-be70-247559c342af-kube-api-access-lgsb5" (OuterVolumeSpecName: "kube-api-access-lgsb5") pod "65da4f0e-6db6-4208-be70-247559c342af" (UID: "65da4f0e-6db6-4208-be70-247559c342af"). InnerVolumeSpecName "kube-api-access-lgsb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.101552 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65da4f0e-6db6-4208-be70-247559c342af" (UID: "65da4f0e-6db6-4208-be70-247559c342af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.145675 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.145709 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgsb5\" (UniqueName: \"kubernetes.io/projected/65da4f0e-6db6-4208-be70-247559c342af-kube-api-access-lgsb5\") on node \"crc\" DevicePath \"\"" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.145720 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65da4f0e-6db6-4208-be70-247559c342af-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.273424 4909 generic.go:334] "Generic (PLEG): container finished" podID="65da4f0e-6db6-4208-be70-247559c342af" containerID="127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13" exitCode=0 Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.273470 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h46qk" event={"ID":"65da4f0e-6db6-4208-be70-247559c342af","Type":"ContainerDied","Data":"127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13"} Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.273496 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h46qk" event={"ID":"65da4f0e-6db6-4208-be70-247559c342af","Type":"ContainerDied","Data":"4a651fe73d9de2182f3eb50fbc34c8c451daa99aadcf3a2deb0bc8af09c670d2"} Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.273512 4909 scope.go:117] "RemoveContainer" containerID="127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.273668 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h46qk" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.327540 4909 scope.go:117] "RemoveContainer" containerID="b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.337925 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h46qk"] Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.356986 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h46qk"] Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.373103 4909 scope.go:117] "RemoveContainer" containerID="33ecf9638980652532b532ab61a5b7c8f9f88a2c245b7b2f92b89db3017da814" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.425001 4909 scope.go:117] "RemoveContainer" containerID="127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13" Nov 26 08:46:44 crc kubenswrapper[4909]: E1126 08:46:44.426751 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13\": container with ID starting with 127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13 not found: ID does not exist" containerID="127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.426783 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13"} err="failed to get container status \"127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13\": rpc error: code = NotFound desc = could not find container \"127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13\": container with ID starting with 127ebe13346f40404e86c9c6f66dce2e35f976f1e368f0ec041a967f1fe70b13 not found: ID does not exist" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.426803 4909 scope.go:117] "RemoveContainer" containerID="b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2" Nov 26 08:46:44 crc kubenswrapper[4909]: E1126 08:46:44.427441 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2\": container with ID starting with b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2 not found: ID does not exist" containerID="b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.427492 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2"} err="failed to get container status \"b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2\": rpc error: code = NotFound desc = could not find container \"b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2\": container with ID starting with b84c58499f0b9bccf8af33ec629aa1045cb1b82845fd58fac15981838d3e67f2 not found: ID does not exist" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.427524 4909 scope.go:117] "RemoveContainer" containerID="33ecf9638980652532b532ab61a5b7c8f9f88a2c245b7b2f92b89db3017da814" Nov 26 08:46:44 crc kubenswrapper[4909]: E1126 08:46:44.428014 4909 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"33ecf9638980652532b532ab61a5b7c8f9f88a2c245b7b2f92b89db3017da814\": container with ID starting with 33ecf9638980652532b532ab61a5b7c8f9f88a2c245b7b2f92b89db3017da814 not found: ID does not exist" containerID="33ecf9638980652532b532ab61a5b7c8f9f88a2c245b7b2f92b89db3017da814" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.428133 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ecf9638980652532b532ab61a5b7c8f9f88a2c245b7b2f92b89db3017da814"} err="failed to get container status \"33ecf9638980652532b532ab61a5b7c8f9f88a2c245b7b2f92b89db3017da814\": rpc error: code = NotFound desc = could not find container \"33ecf9638980652532b532ab61a5b7c8f9f88a2c245b7b2f92b89db3017da814\": container with ID starting with 33ecf9638980652532b532ab61a5b7c8f9f88a2c245b7b2f92b89db3017da814 not found: ID does not exist" Nov 26 08:46:44 crc kubenswrapper[4909]: I1126 08:46:44.522145 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65da4f0e-6db6-4208-be70-247559c342af" path="/var/lib/kubelet/pods/65da4f0e-6db6-4208-be70-247559c342af/volumes" Nov 26 08:46:47 crc kubenswrapper[4909]: I1126 08:46:47.049721 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:47 crc kubenswrapper[4909]: I1126 08:46:47.050715 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:47 crc kubenswrapper[4909]: I1126 08:46:47.118921 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:47 crc kubenswrapper[4909]: I1126 08:46:47.372812 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:48 crc kubenswrapper[4909]: I1126 08:46:48.300958 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz829"] Nov 26 08:46:49 crc kubenswrapper[4909]: I1126 08:46:49.337677 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lz829" podUID="07b58d2d-0abb-4c2a-af4f-671e9f123881" containerName="registry-server" containerID="cri-o://6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011" gracePeriod=2 Nov 26 08:46:49 crc kubenswrapper[4909]: I1126 08:46:49.900775 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.081204 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-utilities\") pod \"07b58d2d-0abb-4c2a-af4f-671e9f123881\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.083008 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-catalog-content\") pod \"07b58d2d-0abb-4c2a-af4f-671e9f123881\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.082101 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-utilities" (OuterVolumeSpecName: "utilities") pod "07b58d2d-0abb-4c2a-af4f-671e9f123881" (UID: "07b58d2d-0abb-4c2a-af4f-671e9f123881"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.087088 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw8kv\" (UniqueName: \"kubernetes.io/projected/07b58d2d-0abb-4c2a-af4f-671e9f123881-kube-api-access-kw8kv\") pod \"07b58d2d-0abb-4c2a-af4f-671e9f123881\" (UID: \"07b58d2d-0abb-4c2a-af4f-671e9f123881\") " Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.087851 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.094424 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b58d2d-0abb-4c2a-af4f-671e9f123881-kube-api-access-kw8kv" (OuterVolumeSpecName: "kube-api-access-kw8kv") pod "07b58d2d-0abb-4c2a-af4f-671e9f123881" (UID: "07b58d2d-0abb-4c2a-af4f-671e9f123881"). InnerVolumeSpecName "kube-api-access-kw8kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.103839 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07b58d2d-0abb-4c2a-af4f-671e9f123881" (UID: "07b58d2d-0abb-4c2a-af4f-671e9f123881"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.189932 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b58d2d-0abb-4c2a-af4f-671e9f123881-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.190262 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw8kv\" (UniqueName: \"kubernetes.io/projected/07b58d2d-0abb-4c2a-af4f-671e9f123881-kube-api-access-kw8kv\") on node \"crc\" DevicePath \"\"" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.348075 4909 generic.go:334] "Generic (PLEG): container finished" podID="07b58d2d-0abb-4c2a-af4f-671e9f123881" containerID="6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011" exitCode=0 Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.348133 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz829" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.348151 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz829" event={"ID":"07b58d2d-0abb-4c2a-af4f-671e9f123881","Type":"ContainerDied","Data":"6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011"} Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.349469 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz829" event={"ID":"07b58d2d-0abb-4c2a-af4f-671e9f123881","Type":"ContainerDied","Data":"7c6ae31782d38cd5141d0e1f5c49345039434dba0c499ea239526b1f51abfc9a"} Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.349550 4909 scope.go:117] "RemoveContainer" containerID="6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.386230 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz829"] Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.393542 4909 scope.go:117] "RemoveContainer" containerID="e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.395060 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz829"] Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.416276 4909 scope.go:117] "RemoveContainer" containerID="547714d95b177e224ba76215ebb8554b78e113b7eaa47479cdb12eb979ff0f01" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.475028 4909 scope.go:117] "RemoveContainer" containerID="6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011" Nov 26 08:46:50 crc kubenswrapper[4909]: E1126 08:46:50.475642 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011\": container with ID starting with 6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011 not found: ID does not exist" containerID="6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.475715 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011"} err="failed to get container status 
\"6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011\": rpc error: code = NotFound desc = could not find container \"6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011\": container with ID starting with 6d91d4c7d3dc36038ccf76ec8ea8fd483ccfecb197e9e2d830a7996f9c190011 not found: ID does not exist" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.475743 4909 scope.go:117] "RemoveContainer" containerID="e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68" Nov 26 08:46:50 crc kubenswrapper[4909]: E1126 08:46:50.476291 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68\": container with ID starting with e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68 not found: ID does not exist" containerID="e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.476335 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68"} err="failed to get container status \"e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68\": rpc error: code = NotFound desc = could not find container \"e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68\": container with ID starting with e837b9db9857f95a65c9cc7bcbea3684e1afabe24ae4e9856b90cb57efa05f68 not found: ID does not exist" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.476350 4909 scope.go:117] "RemoveContainer" containerID="547714d95b177e224ba76215ebb8554b78e113b7eaa47479cdb12eb979ff0f01" Nov 26 08:46:50 crc kubenswrapper[4909]: E1126 08:46:50.476758 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"547714d95b177e224ba76215ebb8554b78e113b7eaa47479cdb12eb979ff0f01\": container with ID starting with 547714d95b177e224ba76215ebb8554b78e113b7eaa47479cdb12eb979ff0f01 not found: ID does not exist" containerID="547714d95b177e224ba76215ebb8554b78e113b7eaa47479cdb12eb979ff0f01" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.476785 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547714d95b177e224ba76215ebb8554b78e113b7eaa47479cdb12eb979ff0f01"} err="failed to get container status \"547714d95b177e224ba76215ebb8554b78e113b7eaa47479cdb12eb979ff0f01\": rpc error: code = NotFound desc = could not find container \"547714d95b177e224ba76215ebb8554b78e113b7eaa47479cdb12eb979ff0f01\": container with ID starting with 547714d95b177e224ba76215ebb8554b78e113b7eaa47479cdb12eb979ff0f01 not found: ID does not exist" Nov 26 08:46:50 crc kubenswrapper[4909]: I1126 08:46:50.511291 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b58d2d-0abb-4c2a-af4f-671e9f123881" path="/var/lib/kubelet/pods/07b58d2d-0abb-4c2a-af4f-671e9f123881/volumes" Nov 26 08:46:55 crc kubenswrapper[4909]: I1126 08:46:55.500575 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:46:55 crc kubenswrapper[4909]: E1126 08:46:55.501471 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:47:07 crc kubenswrapper[4909]: I1126 08:47:07.499722 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:47:07 crc kubenswrapper[4909]: E1126 08:47:07.501023 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:47:20 crc kubenswrapper[4909]: I1126 08:47:20.499259 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:47:20 crc kubenswrapper[4909]: E1126 08:47:20.500485 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:47:32 crc kubenswrapper[4909]: I1126 08:47:32.498742 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:47:32 crc kubenswrapper[4909]: E1126 08:47:32.499906 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:47:47 crc kubenswrapper[4909]: I1126 08:47:47.499659 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:47:47 crc kubenswrapper[4909]: E1126 08:47:47.500968 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:47:58 crc kubenswrapper[4909]: I1126 08:47:58.506311 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:47:58 crc kubenswrapper[4909]: E1126 08:47:58.507038 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" 
podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:48:12 crc kubenswrapper[4909]: I1126 08:48:12.499821 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:48:12 crc kubenswrapper[4909]: E1126 08:48:12.501029 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:48:17 crc kubenswrapper[4909]: I1126 08:48:17.057799 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-9nhsg"] Nov 26 08:48:17 crc kubenswrapper[4909]: I1126 08:48:17.072069 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-9nhsg"] Nov 26 08:48:18 crc kubenswrapper[4909]: I1126 08:48:18.516003 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e45ff8-93e7-4ff0-a81e-c5b8236e22d8" path="/var/lib/kubelet/pods/03e45ff8-93e7-4ff0-a81e-c5b8236e22d8/volumes" Nov 26 08:48:25 crc kubenswrapper[4909]: I1126 08:48:25.499533 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:48:25 crc kubenswrapper[4909]: E1126 08:48:25.501282 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:48:27 crc kubenswrapper[4909]: I1126 08:48:27.044993 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-5def-account-create-n6s9q"] Nov 26 08:48:27 crc kubenswrapper[4909]: I1126 08:48:27.060278 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-5def-account-create-n6s9q"] Nov 26 08:48:28 crc kubenswrapper[4909]: I1126 08:48:28.520252 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580fbd26-06b0-4964-8be0-6b93f1b99690" path="/var/lib/kubelet/pods/580fbd26-06b0-4964-8be0-6b93f1b99690/volumes" Nov 26 08:48:37 crc kubenswrapper[4909]: I1126 08:48:37.499577 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:48:37 crc kubenswrapper[4909]: E1126 08:48:37.500488 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:48:41 crc kubenswrapper[4909]: I1126 08:48:41.057169 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-nttlz"] Nov 26 08:48:41 crc kubenswrapper[4909]: I1126 08:48:41.076525 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-nttlz"] Nov 26 08:48:42 crc kubenswrapper[4909]: 
I1126 08:48:42.513721 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e4e5e9-e628-4196-99c3-d882790cf706" path="/var/lib/kubelet/pods/55e4e5e9-e628-4196-99c3-d882790cf706/volumes" Nov 26 08:48:45 crc kubenswrapper[4909]: I1126 08:48:45.328182 4909 scope.go:117] "RemoveContainer" containerID="4dea50062af10be4784171c7ec91c1635a1e085821cb27669534fba1d44b53e9" Nov 26 08:48:45 crc kubenswrapper[4909]: I1126 08:48:45.377839 4909 scope.go:117] "RemoveContainer" containerID="56a3f68b2b0d31727edf8235e6a3e0db4c64981bf1b59864317b0df6ac6df6c3" Nov 26 08:48:45 crc kubenswrapper[4909]: I1126 08:48:45.469221 4909 scope.go:117] "RemoveContainer" containerID="ac55f08cc9d834a79f32abe463ee43272e603431894cc6753b4e9195a5e37730" Nov 26 08:48:48 crc kubenswrapper[4909]: I1126 08:48:48.509360 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:48:48 crc kubenswrapper[4909]: E1126 08:48:48.510145 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:49:00 crc kubenswrapper[4909]: I1126 08:49:00.499937 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:49:00 crc kubenswrapper[4909]: E1126 08:49:00.500849 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.294329 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xxvgr"] Nov 26 08:49:04 crc kubenswrapper[4909]: E1126 08:49:04.309483 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b58d2d-0abb-4c2a-af4f-671e9f123881" containerName="registry-server" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.309504 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b58d2d-0abb-4c2a-af4f-671e9f123881" containerName="registry-server" Nov 26 08:49:04 crc kubenswrapper[4909]: E1126 08:49:04.309556 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da4f0e-6db6-4208-be70-247559c342af" containerName="registry-server" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.309562 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da4f0e-6db6-4208-be70-247559c342af" containerName="registry-server" Nov 26 08:49:04 crc kubenswrapper[4909]: E1126 08:49:04.309598 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da4f0e-6db6-4208-be70-247559c342af" containerName="extract-content" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.309605 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da4f0e-6db6-4208-be70-247559c342af" containerName="extract-content" Nov 26 08:49:04 crc kubenswrapper[4909]: E1126 08:49:04.309630 4909 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="07b58d2d-0abb-4c2a-af4f-671e9f123881" containerName="extract-content" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.309635 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b58d2d-0abb-4c2a-af4f-671e9f123881" containerName="extract-content" Nov 26 08:49:04 crc kubenswrapper[4909]: E1126 08:49:04.309661 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b58d2d-0abb-4c2a-af4f-671e9f123881" containerName="extract-utilities" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.309667 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b58d2d-0abb-4c2a-af4f-671e9f123881" containerName="extract-utilities" Nov 26 08:49:04 crc kubenswrapper[4909]: E1126 08:49:04.309686 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da4f0e-6db6-4208-be70-247559c342af" containerName="extract-utilities" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.309691 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da4f0e-6db6-4208-be70-247559c342af" containerName="extract-utilities" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.310055 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="65da4f0e-6db6-4208-be70-247559c342af" containerName="registry-server" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.310185 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b58d2d-0abb-4c2a-af4f-671e9f123881" containerName="registry-server" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.312913 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.314742 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxvgr"] Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.408373 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-utilities\") pod \"redhat-operators-xxvgr\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.408548 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-catalog-content\") pod \"redhat-operators-xxvgr\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.408715 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdjvg\" (UniqueName: \"kubernetes.io/projected/023daf89-5612-4d64-9fe3-59e61be8f59c-kube-api-access-xdjvg\") pod \"redhat-operators-xxvgr\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.511251 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-utilities\") pod \"redhat-operators-xxvgr\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.511658 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-catalog-content\") pod \"redhat-operators-xxvgr\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.511719 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdjvg\" (UniqueName: \"kubernetes.io/projected/023daf89-5612-4d64-9fe3-59e61be8f59c-kube-api-access-xdjvg\") pod \"redhat-operators-xxvgr\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.512888 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-utilities\") pod \"redhat-operators-xxvgr\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.513408 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-catalog-content\") pod \"redhat-operators-xxvgr\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.532319 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdjvg\" (UniqueName: \"kubernetes.io/projected/023daf89-5612-4d64-9fe3-59e61be8f59c-kube-api-access-xdjvg\") pod \"redhat-operators-xxvgr\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:04 crc kubenswrapper[4909]: I1126 08:49:04.668164 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:05 crc kubenswrapper[4909]: I1126 08:49:05.248207 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxvgr"] Nov 26 08:49:05 crc kubenswrapper[4909]: I1126 08:49:05.944454 4909 generic.go:334] "Generic (PLEG): container finished" podID="023daf89-5612-4d64-9fe3-59e61be8f59c" containerID="2281e2e3dc96b97007d989df399fbb01f7ddba4e7653130ba99c932333fbf45c" exitCode=0 Nov 26 08:49:05 crc kubenswrapper[4909]: I1126 08:49:05.944546 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxvgr" event={"ID":"023daf89-5612-4d64-9fe3-59e61be8f59c","Type":"ContainerDied","Data":"2281e2e3dc96b97007d989df399fbb01f7ddba4e7653130ba99c932333fbf45c"} Nov 26 08:49:05 crc kubenswrapper[4909]: I1126 08:49:05.945780 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxvgr" event={"ID":"023daf89-5612-4d64-9fe3-59e61be8f59c","Type":"ContainerStarted","Data":"db82c7468408819bd42eac0ee275ae9cf6abeae31af5354a0d319efdea896048"} Nov 26 08:49:05 crc kubenswrapper[4909]: I1126 08:49:05.946709 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 08:49:07 crc kubenswrapper[4909]: I1126 08:49:07.965657 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxvgr" event={"ID":"023daf89-5612-4d64-9fe3-59e61be8f59c","Type":"ContainerStarted","Data":"e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789"} Nov 26 08:49:10 crc kubenswrapper[4909]: I1126 08:49:10.006815 4909 generic.go:334] "Generic (PLEG): container finished" podID="023daf89-5612-4d64-9fe3-59e61be8f59c" containerID="e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789" exitCode=0 Nov 26 08:49:10 crc kubenswrapper[4909]: I1126 08:49:10.006883 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxvgr" event={"ID":"023daf89-5612-4d64-9fe3-59e61be8f59c","Type":"ContainerDied","Data":"e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789"} Nov 26 08:49:11 crc kubenswrapper[4909]: I1126 08:49:11.019049 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxvgr" event={"ID":"023daf89-5612-4d64-9fe3-59e61be8f59c","Type":"ContainerStarted","Data":"963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb"} Nov 26 08:49:11 crc kubenswrapper[4909]: I1126 08:49:11.040458 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xxvgr" podStartSLOduration=2.5140008700000003 podStartE2EDuration="7.040441268s" podCreationTimestamp="2025-11-26 08:49:04 +0000 UTC" firstStartedPulling="2025-11-26 08:49:05.946460263 +0000 UTC m=+6518.092671429" lastFinishedPulling="2025-11-26 08:49:10.472900651 +0000 UTC m=+6522.619111827" observedRunningTime="2025-11-26 08:49:11.039838922 +0000 UTC m=+6523.186050098" watchObservedRunningTime="2025-11-26 08:49:11.040441268 +0000 UTC m=+6523.186652434" Nov 26 08:49:12 crc kubenswrapper[4909]: I1126 08:49:12.499145 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:49:12 crc kubenswrapper[4909]: E1126 08:49:12.499761 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:49:14 crc kubenswrapper[4909]: I1126 08:49:14.668483 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:14 crc kubenswrapper[4909]: I1126 08:49:14.668895 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:15 crc kubenswrapper[4909]: I1126 08:49:15.757227 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xxvgr" podUID="023daf89-5612-4d64-9fe3-59e61be8f59c" containerName="registry-server" probeResult="failure" output=< Nov 26 08:49:15 crc kubenswrapper[4909]: timeout: failed to connect service ":50051" within 1s Nov 26 08:49:15 crc kubenswrapper[4909]: > Nov 26 08:49:23 crc kubenswrapper[4909]: I1126 08:49:23.499148 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:49:23 crc kubenswrapper[4909]: E1126 08:49:23.500123 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:49:24 crc kubenswrapper[4909]: I1126 08:49:24.750120 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:24 crc kubenswrapper[4909]: I1126 08:49:24.800023 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:24 crc kubenswrapper[4909]: I1126 08:49:24.994685 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxvgr"] Nov 26 08:49:26 crc kubenswrapper[4909]: I1126 08:49:26.164851 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xxvgr" podUID="023daf89-5612-4d64-9fe3-59e61be8f59c" containerName="registry-server" containerID="cri-o://963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb" gracePeriod=2 Nov 26 08:49:26 crc kubenswrapper[4909]: I1126 08:49:26.675219 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:26 crc kubenswrapper[4909]: I1126 08:49:26.766415 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdjvg\" (UniqueName: \"kubernetes.io/projected/023daf89-5612-4d64-9fe3-59e61be8f59c-kube-api-access-xdjvg\") pod \"023daf89-5612-4d64-9fe3-59e61be8f59c\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " Nov 26 08:49:26 crc kubenswrapper[4909]: I1126 08:49:26.766551 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-catalog-content\") pod \"023daf89-5612-4d64-9fe3-59e61be8f59c\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " Nov 26 08:49:26 crc kubenswrapper[4909]: I1126 08:49:26.766725 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-utilities\") pod \"023daf89-5612-4d64-9fe3-59e61be8f59c\" (UID: \"023daf89-5612-4d64-9fe3-59e61be8f59c\") " Nov 26 08:49:26 crc kubenswrapper[4909]: I1126 08:49:26.767877 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-utilities" (OuterVolumeSpecName: "utilities") pod "023daf89-5612-4d64-9fe3-59e61be8f59c" (UID: "023daf89-5612-4d64-9fe3-59e61be8f59c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:49:26 crc kubenswrapper[4909]: I1126 08:49:26.774586 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/023daf89-5612-4d64-9fe3-59e61be8f59c-kube-api-access-xdjvg" (OuterVolumeSpecName: "kube-api-access-xdjvg") pod "023daf89-5612-4d64-9fe3-59e61be8f59c" (UID: "023daf89-5612-4d64-9fe3-59e61be8f59c"). InnerVolumeSpecName "kube-api-access-xdjvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:49:26 crc kubenswrapper[4909]: I1126 08:49:26.859155 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "023daf89-5612-4d64-9fe3-59e61be8f59c" (UID: "023daf89-5612-4d64-9fe3-59e61be8f59c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:49:26 crc kubenswrapper[4909]: I1126 08:49:26.869171 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:49:26 crc kubenswrapper[4909]: I1126 08:49:26.869296 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdjvg\" (UniqueName: \"kubernetes.io/projected/023daf89-5612-4d64-9fe3-59e61be8f59c-kube-api-access-xdjvg\") on node \"crc\" DevicePath \"\"" Nov 26 08:49:26 crc kubenswrapper[4909]: I1126 08:49:26.869356 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023daf89-5612-4d64-9fe3-59e61be8f59c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.179169 4909 generic.go:334] "Generic (PLEG): container finished" podID="023daf89-5612-4d64-9fe3-59e61be8f59c" containerID="963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb" exitCode=0 Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.179218 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxvgr" event={"ID":"023daf89-5612-4d64-9fe3-59e61be8f59c","Type":"ContainerDied","Data":"963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb"} Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.179248 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxvgr" event={"ID":"023daf89-5612-4d64-9fe3-59e61be8f59c","Type":"ContainerDied","Data":"db82c7468408819bd42eac0ee275ae9cf6abeae31af5354a0d319efdea896048"} Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.179270 4909 scope.go:117] "RemoveContainer" containerID="963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb" Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.179424 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xxvgr" Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.209523 4909 scope.go:117] "RemoveContainer" containerID="e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789" Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.221726 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxvgr"] Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.229971 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xxvgr"] Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.258961 4909 scope.go:117] "RemoveContainer" containerID="2281e2e3dc96b97007d989df399fbb01f7ddba4e7653130ba99c932333fbf45c" Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.277744 4909 scope.go:117] "RemoveContainer" containerID="963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb" Nov 26 08:49:27 crc kubenswrapper[4909]: E1126 08:49:27.278271 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb\": container with ID starting with 963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb not found: ID does not exist" containerID="963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb" Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.278306 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb"} err="failed to get container status \"963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb\": rpc error: code = NotFound desc = could not find container \"963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb\": container with ID starting with 963ada270e09bcbaf5d7bb0737fbe01c79e31e4bb1c108ad661469e3609f09fb not found: ID does not exist" Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.278331 4909 scope.go:117] "RemoveContainer" containerID="e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789" Nov 26 08:49:27 crc kubenswrapper[4909]: E1126 08:49:27.278660 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789\": container with ID starting with e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789 not found: ID does not exist" containerID="e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789" Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.278718 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789"} err="failed to get container status \"e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789\": rpc error: code = NotFound desc = could not find container \"e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789\": container with ID starting with e4c88b082f29436bfe0c536aef9c490bcfa8f229dd64602ff200f0247272c789 not found: ID does not exist" Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.278762 4909 scope.go:117] "RemoveContainer" containerID="2281e2e3dc96b97007d989df399fbb01f7ddba4e7653130ba99c932333fbf45c" Nov 26 08:49:27 crc kubenswrapper[4909]: E1126 08:49:27.279199 4909 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2281e2e3dc96b97007d989df399fbb01f7ddba4e7653130ba99c932333fbf45c\": container with ID starting with 2281e2e3dc96b97007d989df399fbb01f7ddba4e7653130ba99c932333fbf45c not found: ID does not exist" containerID="2281e2e3dc96b97007d989df399fbb01f7ddba4e7653130ba99c932333fbf45c" Nov 26 08:49:27 crc kubenswrapper[4909]: I1126 08:49:27.279323 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2281e2e3dc96b97007d989df399fbb01f7ddba4e7653130ba99c932333fbf45c"} err="failed to get container status \"2281e2e3dc96b97007d989df399fbb01f7ddba4e7653130ba99c932333fbf45c\": rpc error: code = NotFound desc = could not find container \"2281e2e3dc96b97007d989df399fbb01f7ddba4e7653130ba99c932333fbf45c\": container with ID starting with 2281e2e3dc96b97007d989df399fbb01f7ddba4e7653130ba99c932333fbf45c not found: ID does not exist" Nov 26 08:49:28 crc kubenswrapper[4909]: I1126 08:49:28.515826 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023daf89-5612-4d64-9fe3-59e61be8f59c" path="/var/lib/kubelet/pods/023daf89-5612-4d64-9fe3-59e61be8f59c/volumes" Nov 26 08:49:35 crc kubenswrapper[4909]: I1126 08:49:35.499665 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:49:35 crc kubenswrapper[4909]: E1126 08:49:35.500636 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:49:47 crc kubenswrapper[4909]: I1126 08:49:47.500570 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:49:47 crc kubenswrapper[4909]: E1126 08:49:47.501809 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:49:58 crc kubenswrapper[4909]: I1126 08:49:58.511159 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:49:58 crc kubenswrapper[4909]: E1126 08:49:58.512208 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.599532 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ls7qk"] Nov 26 08:50:00 crc kubenswrapper[4909]: E1126 08:50:00.600254 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023daf89-5612-4d64-9fe3-59e61be8f59c" 
containerName="registry-server" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.600265 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="023daf89-5612-4d64-9fe3-59e61be8f59c" containerName="registry-server" Nov 26 08:50:00 crc kubenswrapper[4909]: E1126 08:50:00.600288 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023daf89-5612-4d64-9fe3-59e61be8f59c" containerName="extract-utilities" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.600309 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="023daf89-5612-4d64-9fe3-59e61be8f59c" containerName="extract-utilities" Nov 26 08:50:00 crc kubenswrapper[4909]: E1126 08:50:00.600335 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023daf89-5612-4d64-9fe3-59e61be8f59c" containerName="extract-content" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.600342 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="023daf89-5612-4d64-9fe3-59e61be8f59c" containerName="extract-content" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.600554 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="023daf89-5612-4d64-9fe3-59e61be8f59c" containerName="registry-server" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.602086 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.630148 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ls7qk"] Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.660771 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-catalog-content\") pod \"certified-operators-ls7qk\" (UID: \"a9347df7-d3df-4fe1-acd1-38425ec97f54\") " pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.661144 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-utilities\") pod \"certified-operators-ls7qk\" (UID: \"a9347df7-d3df-4fe1-acd1-38425ec97f54\") " pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.661184 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wplfz\" (UniqueName: \"kubernetes.io/projected/a9347df7-d3df-4fe1-acd1-38425ec97f54-kube-api-access-wplfz\") pod \"certified-operators-ls7qk\" (UID: \"a9347df7-d3df-4fe1-acd1-38425ec97f54\") " pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.763496 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-utilities\") pod \"certified-operators-ls7qk\" (UID: \"a9347df7-d3df-4fe1-acd1-38425ec97f54\") " pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.763704 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wplfz\" (UniqueName: \"kubernetes.io/projected/a9347df7-d3df-4fe1-acd1-38425ec97f54-kube-api-access-wplfz\") pod \"certified-operators-ls7qk\" (UID: 
\"a9347df7-d3df-4fe1-acd1-38425ec97f54\") " pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.763840 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-catalog-content\") pod \"certified-operators-ls7qk\" (UID: \"a9347df7-d3df-4fe1-acd1-38425ec97f54\") " pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.764169 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-catalog-content\") pod \"certified-operators-ls7qk\" (UID: \"a9347df7-d3df-4fe1-acd1-38425ec97f54\") " pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.764168 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-utilities\") pod \"certified-operators-ls7qk\" (UID: \"a9347df7-d3df-4fe1-acd1-38425ec97f54\") " pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.785227 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wplfz\" (UniqueName: \"kubernetes.io/projected/a9347df7-d3df-4fe1-acd1-38425ec97f54-kube-api-access-wplfz\") pod \"certified-operators-ls7qk\" (UID: \"a9347df7-d3df-4fe1-acd1-38425ec97f54\") " pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:00 crc kubenswrapper[4909]: I1126 08:50:00.945268 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:01 crc kubenswrapper[4909]: I1126 08:50:01.507146 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ls7qk"] Nov 26 08:50:01 crc kubenswrapper[4909]: I1126 08:50:01.587688 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ls7qk" event={"ID":"a9347df7-d3df-4fe1-acd1-38425ec97f54","Type":"ContainerStarted","Data":"1e729a3b4a6e7c311acb863edb73ecc8f5fb6384b1912d6e489f787da30d8822"} Nov 26 08:50:02 crc kubenswrapper[4909]: I1126 08:50:02.598034 4909 generic.go:334] "Generic (PLEG): container finished" podID="a9347df7-d3df-4fe1-acd1-38425ec97f54" containerID="8fabd9b23644a36c41619ccaf2ac3d78048ea356a5982cee005b9ff3c7482900" exitCode=0 Nov 26 08:50:02 crc kubenswrapper[4909]: I1126 08:50:02.598103 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ls7qk" event={"ID":"a9347df7-d3df-4fe1-acd1-38425ec97f54","Type":"ContainerDied","Data":"8fabd9b23644a36c41619ccaf2ac3d78048ea356a5982cee005b9ff3c7482900"} Nov 26 08:50:03 crc kubenswrapper[4909]: I1126 08:50:03.608177 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ls7qk" event={"ID":"a9347df7-d3df-4fe1-acd1-38425ec97f54","Type":"ContainerStarted","Data":"c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c"} Nov 26 08:50:04 crc kubenswrapper[4909]: I1126 08:50:04.619994 4909 generic.go:334] "Generic (PLEG): container finished" podID="a9347df7-d3df-4fe1-acd1-38425ec97f54" containerID="c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c" exitCode=0 Nov 26 08:50:04 crc kubenswrapper[4909]: 
I1126 08:50:04.620035 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ls7qk" event={"ID":"a9347df7-d3df-4fe1-acd1-38425ec97f54","Type":"ContainerDied","Data":"c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c"} Nov 26 08:50:06 crc kubenswrapper[4909]: I1126 08:50:06.687620 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ls7qk" event={"ID":"a9347df7-d3df-4fe1-acd1-38425ec97f54","Type":"ContainerStarted","Data":"e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4"} Nov 26 08:50:10 crc kubenswrapper[4909]: I1126 08:50:10.499271 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:50:10 crc kubenswrapper[4909]: E1126 08:50:10.500741 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:50:10 crc kubenswrapper[4909]: I1126 08:50:10.947258 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:10 crc kubenswrapper[4909]: I1126 08:50:10.947313 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:11 crc kubenswrapper[4909]: I1126 08:50:11.016510 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:11 crc kubenswrapper[4909]: I1126 08:50:11.043805 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ls7qk" podStartSLOduration=8.027454319 podStartE2EDuration="11.04378193s" podCreationTimestamp="2025-11-26 08:50:00 +0000 UTC" firstStartedPulling="2025-11-26 08:50:02.600459615 +0000 UTC m=+6574.746670781" lastFinishedPulling="2025-11-26 08:50:05.616787226 +0000 UTC m=+6577.762998392" observedRunningTime="2025-11-26 08:50:06.734034284 +0000 UTC m=+6578.880245450" watchObservedRunningTime="2025-11-26 08:50:11.04378193 +0000 UTC m=+6583.189993096" Nov 26 08:50:11 crc kubenswrapper[4909]: I1126 08:50:11.802166 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ls7qk" Nov 26 08:50:11 crc kubenswrapper[4909]: I1126 08:50:11.916998 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ls7qk"] Nov 26 08:50:13 crc kubenswrapper[4909]: I1126 08:50:13.758291 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ls7qk" podUID="a9347df7-d3df-4fe1-acd1-38425ec97f54" containerName="registry-server" containerID="cri-o://e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4" gracePeriod=2 Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.360690 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.360690 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ls7qk"
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.497229 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-catalog-content\") pod \"a9347df7-d3df-4fe1-acd1-38425ec97f54\" (UID: \"a9347df7-d3df-4fe1-acd1-38425ec97f54\") "
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.497371 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-utilities\") pod \"a9347df7-d3df-4fe1-acd1-38425ec97f54\" (UID: \"a9347df7-d3df-4fe1-acd1-38425ec97f54\") "
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.497866 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wplfz\" (UniqueName: \"kubernetes.io/projected/a9347df7-d3df-4fe1-acd1-38425ec97f54-kube-api-access-wplfz\") pod \"a9347df7-d3df-4fe1-acd1-38425ec97f54\" (UID: \"a9347df7-d3df-4fe1-acd1-38425ec97f54\") "
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.498329 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-utilities" (OuterVolumeSpecName: "utilities") pod "a9347df7-d3df-4fe1-acd1-38425ec97f54" (UID: "a9347df7-d3df-4fe1-acd1-38425ec97f54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.499182 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.505804 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9347df7-d3df-4fe1-acd1-38425ec97f54-kube-api-access-wplfz" (OuterVolumeSpecName: "kube-api-access-wplfz") pod "a9347df7-d3df-4fe1-acd1-38425ec97f54" (UID: "a9347df7-d3df-4fe1-acd1-38425ec97f54"). InnerVolumeSpecName "kube-api-access-wplfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.541384 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9347df7-d3df-4fe1-acd1-38425ec97f54" (UID: "a9347df7-d3df-4fe1-acd1-38425ec97f54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.601205 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9347df7-d3df-4fe1-acd1-38425ec97f54-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.601241 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wplfz\" (UniqueName: \"kubernetes.io/projected/a9347df7-d3df-4fe1-acd1-38425ec97f54-kube-api-access-wplfz\") on node \"crc\" DevicePath \"\""
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.775314 4909 generic.go:334] "Generic (PLEG): container finished" podID="a9347df7-d3df-4fe1-acd1-38425ec97f54" containerID="e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4" exitCode=0
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.776481 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ls7qk"
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.776492 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ls7qk" event={"ID":"a9347df7-d3df-4fe1-acd1-38425ec97f54","Type":"ContainerDied","Data":"e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4"}
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.777793 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ls7qk" event={"ID":"a9347df7-d3df-4fe1-acd1-38425ec97f54","Type":"ContainerDied","Data":"1e729a3b4a6e7c311acb863edb73ecc8f5fb6384b1912d6e489f787da30d8822"}
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.777846 4909 scope.go:117] "RemoveContainer" containerID="e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4"
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.809247 4909 scope.go:117] "RemoveContainer" containerID="c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c"
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.820767 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ls7qk"]
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.834378 4909 scope.go:117] "RemoveContainer" containerID="8fabd9b23644a36c41619ccaf2ac3d78048ea356a5982cee005b9ff3c7482900"
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.838919 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ls7qk"]
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.903029 4909 scope.go:117] "RemoveContainer" containerID="e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4"
Nov 26 08:50:14 crc kubenswrapper[4909]: E1126 08:50:14.904021 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4\": container with ID starting with e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4 not found: ID does not exist" containerID="e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4"
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.904049 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4"} err="failed to get container status \"e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4\": rpc error: code = NotFound desc = could not find container \"e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4\": container with ID starting with e170441b009456363cd16ec85b6e0d6ee62ae9ab277fdeb0fa087242192999a4 not found: ID does not exist"
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.904073 4909 scope.go:117] "RemoveContainer" containerID="c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c"
Nov 26 08:50:14 crc kubenswrapper[4909]: E1126 08:50:14.904460 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c\": container with ID starting with c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c not found: ID does not exist" containerID="c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c"
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.904505 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c"} err="failed to get container status \"c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c\": rpc error: code = NotFound desc = could not find container \"c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c\": container with ID starting with c75717e98c539de28f4fbb143a03029f060885740f6ca2fcd993c7886f65c26c not found: ID does not exist"
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.904532 4909 scope.go:117] "RemoveContainer" containerID="8fabd9b23644a36c41619ccaf2ac3d78048ea356a5982cee005b9ff3c7482900"
Nov 26 08:50:14 crc kubenswrapper[4909]: E1126 08:50:14.904858 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fabd9b23644a36c41619ccaf2ac3d78048ea356a5982cee005b9ff3c7482900\": container with ID starting with 8fabd9b23644a36c41619ccaf2ac3d78048ea356a5982cee005b9ff3c7482900 not found: ID does not exist" containerID="8fabd9b23644a36c41619ccaf2ac3d78048ea356a5982cee005b9ff3c7482900"
Nov 26 08:50:14 crc kubenswrapper[4909]: I1126 08:50:14.904897 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fabd9b23644a36c41619ccaf2ac3d78048ea356a5982cee005b9ff3c7482900"} err="failed to get container status \"8fabd9b23644a36c41619ccaf2ac3d78048ea356a5982cee005b9ff3c7482900\": rpc error: code = NotFound desc = could not find container \"8fabd9b23644a36c41619ccaf2ac3d78048ea356a5982cee005b9ff3c7482900\": container with ID starting with 8fabd9b23644a36c41619ccaf2ac3d78048ea356a5982cee005b9ff3c7482900 not found: ID does not exist"
Nov 26 08:50:16 crc kubenswrapper[4909]: I1126 08:50:16.514768 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9347df7-d3df-4fe1-acd1-38425ec97f54" path="/var/lib/kubelet/pods/a9347df7-d3df-4fe1-acd1-38425ec97f54/volumes"
Nov 26 08:50:21 crc kubenswrapper[4909]: I1126 08:50:21.499046 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622"
Nov 26 08:50:21 crc kubenswrapper[4909]: E1126 08:50:21.499545 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:50:33 crc kubenswrapper[4909]: I1126 08:50:33.499430 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622"
Nov 26 08:50:33 crc kubenswrapper[4909]: E1126 08:50:33.500138 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:50:47 crc kubenswrapper[4909]: I1126 08:50:47.500526 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622"
Nov 26 08:50:47 crc kubenswrapper[4909]: E1126 08:50:47.501689 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:50:48 crc kubenswrapper[4909]: I1126 08:50:48.087095 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-5l2lv"]
Nov 26 08:50:48 crc kubenswrapper[4909]: I1126 08:50:48.100691 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-5l2lv"]
Nov 26 08:50:48 crc kubenswrapper[4909]: I1126 08:50:48.516926 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edf71fe7-e97d-4b60-9af7-a00c41f7d141" path="/var/lib/kubelet/pods/edf71fe7-e97d-4b60-9af7-a00c41f7d141/volumes"
Nov 26 08:50:58 crc kubenswrapper[4909]: I1126 08:50:58.055894 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-fc5d-account-create-t8f2m"]
Nov 26 08:50:58 crc kubenswrapper[4909]: I1126 08:50:58.074273 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-fc5d-account-create-t8f2m"]
Nov 26 08:50:58 crc kubenswrapper[4909]: I1126 08:50:58.525643 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994257ba-d9f2-49c5-bc46-ef44428bdbe9" path="/var/lib/kubelet/pods/994257ba-d9f2-49c5-bc46-ef44428bdbe9/volumes"
Nov 26 08:51:02 crc kubenswrapper[4909]: I1126 08:51:02.499093 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622"
Nov 26 08:51:02 crc kubenswrapper[4909]: E1126 08:51:02.499984 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:51:09 crc kubenswrapper[4909]: I1126 08:51:09.054702 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-jhvlc"]
Nov 26 08:51:09 crc kubenswrapper[4909]: I1126 08:51:09.066552 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-jhvlc"]
Nov 26 08:51:10 crc kubenswrapper[4909]: I1126 08:51:10.519896 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1fa13bf-535b-4d94-9f7a-87f8ab536ad7" path="/var/lib/kubelet/pods/a1fa13bf-535b-4d94-9f7a-87f8ab536ad7/volumes"
Nov 26 08:51:14 crc kubenswrapper[4909]: I1126 08:51:14.499062 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622"
Nov 26 08:51:15 crc kubenswrapper[4909]: I1126 08:51:15.437817 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"c4a01709a31556cba47cfbc7acf4fffb334a8059c80b7836f817a64be470f2c6"}
Nov 26 08:51:26 crc kubenswrapper[4909]: I1126 08:51:26.048858 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-5d4fw"]
Nov 26 08:51:26 crc kubenswrapper[4909]: I1126 08:51:26.058242 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-5d4fw"]
Nov 26 08:51:26 crc kubenswrapper[4909]: I1126 08:51:26.513563 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a92a58e-edd3-4da6-bbd3-c7dc41189ab5" path="/var/lib/kubelet/pods/5a92a58e-edd3-4da6-bbd3-c7dc41189ab5/volumes"
Nov 26 08:51:36 crc kubenswrapper[4909]: I1126 08:51:36.035312 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-51b4-account-create-4dxrc"]
Nov 26 08:51:36 crc kubenswrapper[4909]: I1126 08:51:36.046154 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-51b4-account-create-4dxrc"]
Nov 26 08:51:36 crc kubenswrapper[4909]: I1126 08:51:36.522178 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2e3c678-4a1f-4fa3-9e4d-cb414adcca54" path="/var/lib/kubelet/pods/c2e3c678-4a1f-4fa3-9e4d-cb414adcca54/volumes"
Nov 26 08:51:45 crc kubenswrapper[4909]: I1126 08:51:45.714381 4909 scope.go:117] "RemoveContainer" containerID="1e873411e0fef74689c76ae55bc2f3e6d2f99623e686822a67cc3e4ed5b49fef"
Nov 26 08:51:45 crc kubenswrapper[4909]: I1126 08:51:45.769123 4909 scope.go:117] "RemoveContainer" containerID="8348115e18368cbe9118777987dabd01579cc87b4bbf282501e8defe512d27cd"
Nov 26 08:51:45 crc kubenswrapper[4909]: I1126 08:51:45.847032 4909 scope.go:117] "RemoveContainer" containerID="86e3a0fc39e157287bf91235111cd2ea6eec93a963dadada22da5aa12f9182b7"
Nov 26 08:51:45 crc kubenswrapper[4909]: I1126 08:51:45.887839 4909 scope.go:117] "RemoveContainer" containerID="e28686a850ebcf4a11463e12bfc074f25f1cc1a3a80543d004920b96839f6897"
Nov 26 08:51:45 crc kubenswrapper[4909]: I1126 08:51:45.925547 4909 scope.go:117] "RemoveContainer" containerID="82008ba324306166c1092f5fd6df1e3c53ce1079c21a51b88fc6d5eec46d6a8c"
Nov 26 08:51:48 crc kubenswrapper[4909]: I1126 08:51:48.044227 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-8v2j5"]
Nov 26 08:51:48 crc kubenswrapper[4909]: I1126 08:51:48.053707 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-8v2j5"]
Nov 26 08:51:48 crc kubenswrapper[4909]: I1126 08:51:48.521316 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4237001-afe3-49f8-84cd-6772277c3020" path="/var/lib/kubelet/pods/f4237001-afe3-49f8-84cd-6772277c3020/volumes"
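
The machine-config-daemon records above show restart back-off in action: every sync attempt between 08:50:10 and 08:51:02 is rejected with "back-off 5m0s restarting failed container", and only at 08:51:14-08:51:15 does the kubelet actually remove the old container and start a new one. The kubelet's documented behavior is an exponential restart delay that starts at 10s, doubles per crash, and caps at 5m0s, which is the figure quoted in the error. An illustrative Go sketch of that shape (not the kubelet's implementation):

    package main

    import (
        "fmt"
        "time"
    )

    // Illustrates the back-off shape behind "back-off 5m0s restarting
    // failed container": an exponentially growing restart delay capped at
    // five minutes. The constants mirror the kubelet's documented defaults.
    func main() {
        const (
            initialDelay = 10 * time.Second
            maxDelay     = 5 * time.Minute
        )
        delay := initialDelay
        for crash := 1; crash <= 7; crash++ {
            fmt.Printf("after crash %d: wait %v before restart\n", crash, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // from here on, every retry waits the full 5m0s
            }
        }
    }
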
Nov 26 08:52:46 crc kubenswrapper[4909]: I1126 08:52:46.095522 4909 scope.go:117] "RemoveContainer" containerID="b25932b4848b63e9f97254d2fe469f8e80fdf12eb8f008e65674b14d60a76466"
Nov 26 08:53:37 crc kubenswrapper[4909]: I1126 08:53:37.301759 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 08:53:37 crc kubenswrapper[4909]: I1126 08:53:37.302505 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 08:54:07 crc kubenswrapper[4909]: I1126 08:54:07.300746 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 08:54:07 crc kubenswrapper[4909]: I1126 08:54:07.301521 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 08:54:37 crc kubenswrapper[4909]: I1126 08:54:37.301607 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 08:54:37 crc kubenswrapper[4909]: I1126 08:54:37.302226 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 08:54:37 crc kubenswrapper[4909]: I1126 08:54:37.302279 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv"
Nov 26 08:54:37 crc kubenswrapper[4909]: I1126 08:54:37.303226 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4a01709a31556cba47cfbc7acf4fffb334a8059c80b7836f817a64be470f2c6"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 26 08:54:37 crc kubenswrapper[4909]: I1126 08:54:37.303309 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://c4a01709a31556cba47cfbc7acf4fffb334a8059c80b7836f817a64be470f2c6" gracePeriod=600
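
The repeated liveness failures above are ordinary HTTP probes: the kubelet issues a GET against http://127.0.0.1:8798/health once per period, and a refused connection counts as a failure. Once the failure threshold is crossed, it kills the container (here with gracePeriod=600) and restarts it, which is exactly the 08:54:37 sequence that follows. A minimal sketch of such a check, assuming the kubelet's usual success criterion of a 2xx/3xx response (illustrative, not the kubelet's prober):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // Sketch of an HTTP liveness check like the one failing above. The URL
    // matches the probe target in the log; anything other than a 2xx/3xx
    // response, including "connection refused", counts as a failure.
    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unhealthy: status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Liveness probe failed:", err)
        }
    }
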
podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="c4a01709a31556cba47cfbc7acf4fffb334a8059c80b7836f817a64be470f2c6" exitCode=0 Nov 26 08:54:37 crc kubenswrapper[4909]: I1126 08:54:37.867718 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"c4a01709a31556cba47cfbc7acf4fffb334a8059c80b7836f817a64be470f2c6"} Nov 26 08:54:37 crc kubenswrapper[4909]: I1126 08:54:37.867972 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"} Nov 26 08:54:37 crc kubenswrapper[4909]: I1126 08:54:37.868003 4909 scope.go:117] "RemoveContainer" containerID="9f5f14f5642d7860e812d62f3c014c3950e5c48a30a3296b15b7653beade5622" Nov 26 08:54:40 crc kubenswrapper[4909]: I1126 08:54:40.908683 4909 generic.go:334] "Generic (PLEG): container finished" podID="e0f2810b-5183-4439-88f2-7c47010a5aa9" containerID="d030d9b92413a5c9af265aeda2917e5b7fa55d8744223b98fc160eab9976b311" exitCode=0 Nov 26 08:54:40 crc kubenswrapper[4909]: I1126 08:54:40.908744 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" event={"ID":"e0f2810b-5183-4439-88f2-7c47010a5aa9","Type":"ContainerDied","Data":"d030d9b92413a5c9af265aeda2917e5b7fa55d8744223b98fc160eab9976b311"} Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.359226 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.399820 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ssh-key\") pod \"e0f2810b-5183-4439-88f2-7c47010a5aa9\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.400013 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-inventory\") pod \"e0f2810b-5183-4439-88f2-7c47010a5aa9\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.400038 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-tripleo-cleanup-combined-ca-bundle\") pod \"e0f2810b-5183-4439-88f2-7c47010a5aa9\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.400173 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76z9w\" (UniqueName: \"kubernetes.io/projected/e0f2810b-5183-4439-88f2-7c47010a5aa9-kube-api-access-76z9w\") pod \"e0f2810b-5183-4439-88f2-7c47010a5aa9\" (UID: \"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.400244 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ceph\") pod \"e0f2810b-5183-4439-88f2-7c47010a5aa9\" (UID: 
\"e0f2810b-5183-4439-88f2-7c47010a5aa9\") " Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.407136 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ceph" (OuterVolumeSpecName: "ceph") pod "e0f2810b-5183-4439-88f2-7c47010a5aa9" (UID: "e0f2810b-5183-4439-88f2-7c47010a5aa9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.407160 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "e0f2810b-5183-4439-88f2-7c47010a5aa9" (UID: "e0f2810b-5183-4439-88f2-7c47010a5aa9"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.408177 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f2810b-5183-4439-88f2-7c47010a5aa9-kube-api-access-76z9w" (OuterVolumeSpecName: "kube-api-access-76z9w") pod "e0f2810b-5183-4439-88f2-7c47010a5aa9" (UID: "e0f2810b-5183-4439-88f2-7c47010a5aa9"). InnerVolumeSpecName "kube-api-access-76z9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.436219 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-inventory" (OuterVolumeSpecName: "inventory") pod "e0f2810b-5183-4439-88f2-7c47010a5aa9" (UID: "e0f2810b-5183-4439-88f2-7c47010a5aa9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.451425 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e0f2810b-5183-4439-88f2-7c47010a5aa9" (UID: "e0f2810b-5183-4439-88f2-7c47010a5aa9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.504832 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.504871 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.504884 4909 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.504909 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76z9w\" (UniqueName: \"kubernetes.io/projected/e0f2810b-5183-4439-88f2-7c47010a5aa9-kube-api-access-76z9w\") on node \"crc\" DevicePath \"\"" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.504921 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0f2810b-5183-4439-88f2-7c47010a5aa9-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.953414 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" event={"ID":"e0f2810b-5183-4439-88f2-7c47010a5aa9","Type":"ContainerDied","Data":"a427b6bd81fec3680d9f3438d3039e72c92be2ba3636a81f0107a99bb35f3a87"} Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.953839 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a427b6bd81fec3680d9f3438d3039e72c92be2ba3636a81f0107a99bb35f3a87" Nov 26 08:54:42 crc kubenswrapper[4909]: I1126 08:54:42.953973 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.923195 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-dclj2"] Nov 26 08:54:52 crc kubenswrapper[4909]: E1126 08:54:52.924288 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9347df7-d3df-4fe1-acd1-38425ec97f54" containerName="registry-server" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.924305 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9347df7-d3df-4fe1-acd1-38425ec97f54" containerName="registry-server" Nov 26 08:54:52 crc kubenswrapper[4909]: E1126 08:54:52.924396 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9347df7-d3df-4fe1-acd1-38425ec97f54" containerName="extract-content" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.924405 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9347df7-d3df-4fe1-acd1-38425ec97f54" containerName="extract-content" Nov 26 08:54:52 crc kubenswrapper[4909]: E1126 08:54:52.924424 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9347df7-d3df-4fe1-acd1-38425ec97f54" containerName="extract-utilities" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.924444 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9347df7-d3df-4fe1-acd1-38425ec97f54" containerName="extract-utilities" Nov 26 08:54:52 crc kubenswrapper[4909]: E1126 08:54:52.924471 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f2810b-5183-4439-88f2-7c47010a5aa9" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.924482 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f2810b-5183-4439-88f2-7c47010a5aa9" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.924769 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f2810b-5183-4439-88f2-7c47010a5aa9" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.924794 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9347df7-d3df-4fe1-acd1-38425ec97f54" containerName="registry-server" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.925785 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.928644 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.928743 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.928751 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.930365 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 08:54:52 crc kubenswrapper[4909]: I1126 08:54:52.944368 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-dclj2"] Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.020657 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.020799 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.020856 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-inventory\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.020895 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx95p\" (UniqueName: \"kubernetes.io/projected/03058c3f-9b59-4c2c-ada7-8291a75dae01-kube-api-access-dx95p\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.021003 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ceph\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.122969 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-inventory\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: 
I1126 08:54:53.123033 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx95p\" (UniqueName: \"kubernetes.io/projected/03058c3f-9b59-4c2c-ada7-8291a75dae01-kube-api-access-dx95p\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.123097 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ceph\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.123150 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.123229 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.129415 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.129478 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.131071 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-inventory\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.134093 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ceph\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.138068 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx95p\" (UniqueName: \"kubernetes.io/projected/03058c3f-9b59-4c2c-ada7-8291a75dae01-kube-api-access-dx95p\") pod \"bootstrap-openstack-openstack-cell1-dclj2\" (UID: 
\"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.256740 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.809707 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-dclj2"] Nov 26 08:54:53 crc kubenswrapper[4909]: I1126 08:54:53.823105 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 08:54:54 crc kubenswrapper[4909]: I1126 08:54:54.052067 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" event={"ID":"03058c3f-9b59-4c2c-ada7-8291a75dae01","Type":"ContainerStarted","Data":"744f175985a6b280896eff60e2cdc4be6d5cd934d18392a6556cab1fce04e5c2"} Nov 26 08:54:55 crc kubenswrapper[4909]: I1126 08:54:55.063073 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" event={"ID":"03058c3f-9b59-4c2c-ada7-8291a75dae01","Type":"ContainerStarted","Data":"35dce27f725db138232b57be05dd6932438191061b8b868a21a93c61a1a5016d"} Nov 26 08:54:55 crc kubenswrapper[4909]: I1126 08:54:55.082117 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" podStartSLOduration=2.6182016790000002 podStartE2EDuration="3.082096822s" podCreationTimestamp="2025-11-26 08:54:52 +0000 UTC" firstStartedPulling="2025-11-26 08:54:53.822832677 +0000 UTC m=+6865.969043843" lastFinishedPulling="2025-11-26 08:54:54.28672782 +0000 UTC m=+6866.432938986" observedRunningTime="2025-11-26 08:54:55.078055392 +0000 UTC m=+6867.224266568" watchObservedRunningTime="2025-11-26 08:54:55.082096822 +0000 UTC m=+6867.228307998" Nov 26 08:56:37 crc kubenswrapper[4909]: I1126 08:56:37.301195 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 08:56:37 crc kubenswrapper[4909]: I1126 08:56:37.301934 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.404208 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n2nw8"] Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.409852 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.415382 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n2nw8"] Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.505021 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw6g4\" (UniqueName: \"kubernetes.io/projected/3d8c9339-1661-4620-977d-eb2ecdc7a976-kube-api-access-xw6g4\") pod \"community-operators-n2nw8\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.505267 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-catalog-content\") pod \"community-operators-n2nw8\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.505293 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-utilities\") pod \"community-operators-n2nw8\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.607956 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-catalog-content\") pod \"community-operators-n2nw8\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.608017 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-utilities\") pod \"community-operators-n2nw8\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.608105 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw6g4\" (UniqueName: \"kubernetes.io/projected/3d8c9339-1661-4620-977d-eb2ecdc7a976-kube-api-access-xw6g4\") pod \"community-operators-n2nw8\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.608563 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-catalog-content\") pod \"community-operators-n2nw8\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.608852 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-utilities\") pod \"community-operators-n2nw8\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.638638 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xw6g4\" (UniqueName: \"kubernetes.io/projected/3d8c9339-1661-4620-977d-eb2ecdc7a976-kube-api-access-xw6g4\") pod \"community-operators-n2nw8\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:47 crc kubenswrapper[4909]: I1126 08:56:47.741412 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:48 crc kubenswrapper[4909]: I1126 08:56:48.312700 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n2nw8"] Nov 26 08:56:49 crc kubenswrapper[4909]: I1126 08:56:49.327798 4909 generic.go:334] "Generic (PLEG): container finished" podID="3d8c9339-1661-4620-977d-eb2ecdc7a976" containerID="9ed79d0bb2657e02ced5c9bb12436980521c56ccbf770556479058f05bd1e999" exitCode=0 Nov 26 08:56:49 crc kubenswrapper[4909]: I1126 08:56:49.327849 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2nw8" event={"ID":"3d8c9339-1661-4620-977d-eb2ecdc7a976","Type":"ContainerDied","Data":"9ed79d0bb2657e02ced5c9bb12436980521c56ccbf770556479058f05bd1e999"} Nov 26 08:56:49 crc kubenswrapper[4909]: I1126 08:56:49.328102 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2nw8" event={"ID":"3d8c9339-1661-4620-977d-eb2ecdc7a976","Type":"ContainerStarted","Data":"71cd853db9ad395efa298efb6dc90bb5d2aad422ed06ea86ea1aa33baec0d0bf"} Nov 26 08:56:51 crc kubenswrapper[4909]: I1126 08:56:51.362050 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2nw8" event={"ID":"3d8c9339-1661-4620-977d-eb2ecdc7a976","Type":"ContainerStarted","Data":"57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50"} Nov 26 08:56:52 crc kubenswrapper[4909]: I1126 08:56:52.373572 4909 generic.go:334] "Generic (PLEG): container finished" podID="3d8c9339-1661-4620-977d-eb2ecdc7a976" containerID="57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50" exitCode=0 Nov 26 08:56:52 crc kubenswrapper[4909]: I1126 08:56:52.373662 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2nw8" event={"ID":"3d8c9339-1661-4620-977d-eb2ecdc7a976","Type":"ContainerDied","Data":"57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50"} Nov 26 08:56:53 crc kubenswrapper[4909]: I1126 08:56:53.387106 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2nw8" event={"ID":"3d8c9339-1661-4620-977d-eb2ecdc7a976","Type":"ContainerStarted","Data":"d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d"} Nov 26 08:56:53 crc kubenswrapper[4909]: I1126 08:56:53.411531 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n2nw8" podStartSLOduration=2.85183675 podStartE2EDuration="6.411511128s" podCreationTimestamp="2025-11-26 08:56:47 +0000 UTC" firstStartedPulling="2025-11-26 08:56:49.329583016 +0000 UTC m=+6981.475794182" lastFinishedPulling="2025-11-26 08:56:52.889257394 +0000 UTC m=+6985.035468560" observedRunningTime="2025-11-26 08:56:53.404456055 +0000 UTC m=+6985.550667251" watchObservedRunningTime="2025-11-26 08:56:53.411511128 +0000 UTC m=+6985.557722304" Nov 26 08:56:53 crc kubenswrapper[4909]: I1126 08:56:53.967517 4909 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nb9ch"] Nov 26 08:56:53 crc kubenswrapper[4909]: I1126 08:56:53.970119 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:56:53 crc kubenswrapper[4909]: I1126 08:56:53.981842 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb9ch"] Nov 26 08:56:54 crc kubenswrapper[4909]: I1126 08:56:54.056248 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46889\" (UniqueName: \"kubernetes.io/projected/113e1875-f6d6-4b80-9ea3-07b0329c311e-kube-api-access-46889\") pod \"redhat-marketplace-nb9ch\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") " pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:56:54 crc kubenswrapper[4909]: I1126 08:56:54.056357 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-utilities\") pod \"redhat-marketplace-nb9ch\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") " pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:56:54 crc kubenswrapper[4909]: I1126 08:56:54.056378 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-catalog-content\") pod \"redhat-marketplace-nb9ch\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") " pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:56:54 crc kubenswrapper[4909]: I1126 08:56:54.159427 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-utilities\") pod \"redhat-marketplace-nb9ch\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") " pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:56:54 crc kubenswrapper[4909]: I1126 08:56:54.159477 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-catalog-content\") pod \"redhat-marketplace-nb9ch\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") " pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:56:54 crc kubenswrapper[4909]: I1126 08:56:54.159704 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46889\" (UniqueName: \"kubernetes.io/projected/113e1875-f6d6-4b80-9ea3-07b0329c311e-kube-api-access-46889\") pod \"redhat-marketplace-nb9ch\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") " pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:56:54 crc kubenswrapper[4909]: I1126 08:56:54.160134 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-catalog-content\") pod \"redhat-marketplace-nb9ch\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") " pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:56:54 crc kubenswrapper[4909]: I1126 08:56:54.160158 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-utilities\") pod \"redhat-marketplace-nb9ch\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") 
" pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:56:54 crc kubenswrapper[4909]: I1126 08:56:54.182302 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46889\" (UniqueName: \"kubernetes.io/projected/113e1875-f6d6-4b80-9ea3-07b0329c311e-kube-api-access-46889\") pod \"redhat-marketplace-nb9ch\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") " pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:56:54 crc kubenswrapper[4909]: I1126 08:56:54.307772 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:56:55 crc kubenswrapper[4909]: I1126 08:56:54.841422 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb9ch"] Nov 26 08:56:55 crc kubenswrapper[4909]: W1126 08:56:54.845784 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod113e1875_f6d6_4b80_9ea3_07b0329c311e.slice/crio-9b05f473921068e60bed8e0cd3587f7949f7ef3dce22d742bc931f6b3408d4d2 WatchSource:0}: Error finding container 9b05f473921068e60bed8e0cd3587f7949f7ef3dce22d742bc931f6b3408d4d2: Status 404 returned error can't find the container with id 9b05f473921068e60bed8e0cd3587f7949f7ef3dce22d742bc931f6b3408d4d2 Nov 26 08:56:55 crc kubenswrapper[4909]: I1126 08:56:55.419710 4909 generic.go:334] "Generic (PLEG): container finished" podID="113e1875-f6d6-4b80-9ea3-07b0329c311e" containerID="da6cc4e1ec275259dd25180a8ca9e4536c7d85b2d32fdb133a48118732e8859f" exitCode=0 Nov 26 08:56:55 crc kubenswrapper[4909]: I1126 08:56:55.419873 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb9ch" event={"ID":"113e1875-f6d6-4b80-9ea3-07b0329c311e","Type":"ContainerDied","Data":"da6cc4e1ec275259dd25180a8ca9e4536c7d85b2d32fdb133a48118732e8859f"} Nov 26 08:56:55 crc kubenswrapper[4909]: I1126 08:56:55.420121 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb9ch" event={"ID":"113e1875-f6d6-4b80-9ea3-07b0329c311e","Type":"ContainerStarted","Data":"9b05f473921068e60bed8e0cd3587f7949f7ef3dce22d742bc931f6b3408d4d2"} Nov 26 08:56:57 crc kubenswrapper[4909]: I1126 08:56:57.452845 4909 generic.go:334] "Generic (PLEG): container finished" podID="113e1875-f6d6-4b80-9ea3-07b0329c311e" containerID="c80cd0e2cff538dbab759494824202ace1e1cd5b98cd0ac6695ef3d39a15422f" exitCode=0 Nov 26 08:56:57 crc kubenswrapper[4909]: I1126 08:56:57.453036 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb9ch" event={"ID":"113e1875-f6d6-4b80-9ea3-07b0329c311e","Type":"ContainerDied","Data":"c80cd0e2cff538dbab759494824202ace1e1cd5b98cd0ac6695ef3d39a15422f"} Nov 26 08:56:57 crc kubenswrapper[4909]: I1126 08:56:57.742581 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:57 crc kubenswrapper[4909]: I1126 08:56:57.742713 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:57 crc kubenswrapper[4909]: I1126 08:56:57.822431 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:58 crc kubenswrapper[4909]: I1126 08:56:58.525652 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-nb9ch" event={"ID":"113e1875-f6d6-4b80-9ea3-07b0329c311e","Type":"ContainerStarted","Data":"e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af"} Nov 26 08:56:58 crc kubenswrapper[4909]: I1126 08:56:58.549614 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nb9ch" podStartSLOduration=3.083107369 podStartE2EDuration="5.549567853s" podCreationTimestamp="2025-11-26 08:56:53 +0000 UTC" firstStartedPulling="2025-11-26 08:56:55.422066865 +0000 UTC m=+6987.568278071" lastFinishedPulling="2025-11-26 08:56:57.888527389 +0000 UTC m=+6990.034738555" observedRunningTime="2025-11-26 08:56:58.534372108 +0000 UTC m=+6990.680583294" watchObservedRunningTime="2025-11-26 08:56:58.549567853 +0000 UTC m=+6990.695779019" Nov 26 08:56:58 crc kubenswrapper[4909]: I1126 08:56:58.577532 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:56:59 crc kubenswrapper[4909]: I1126 08:56:59.352130 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n2nw8"] Nov 26 08:57:00 crc kubenswrapper[4909]: I1126 08:57:00.535336 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n2nw8" podUID="3d8c9339-1661-4620-977d-eb2ecdc7a976" containerName="registry-server" containerID="cri-o://d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d" gracePeriod=2 Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.045187 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2nw8" Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.219883 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw6g4\" (UniqueName: \"kubernetes.io/projected/3d8c9339-1661-4620-977d-eb2ecdc7a976-kube-api-access-xw6g4\") pod \"3d8c9339-1661-4620-977d-eb2ecdc7a976\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.220022 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-catalog-content\") pod \"3d8c9339-1661-4620-977d-eb2ecdc7a976\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.220241 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-utilities\") pod \"3d8c9339-1661-4620-977d-eb2ecdc7a976\" (UID: \"3d8c9339-1661-4620-977d-eb2ecdc7a976\") " Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.220868 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-utilities" (OuterVolumeSpecName: "utilities") pod "3d8c9339-1661-4620-977d-eb2ecdc7a976" (UID: "3d8c9339-1661-4620-977d-eb2ecdc7a976"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.221425 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.232328 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8c9339-1661-4620-977d-eb2ecdc7a976-kube-api-access-xw6g4" (OuterVolumeSpecName: "kube-api-access-xw6g4") pod "3d8c9339-1661-4620-977d-eb2ecdc7a976" (UID: "3d8c9339-1661-4620-977d-eb2ecdc7a976"). InnerVolumeSpecName "kube-api-access-xw6g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.268829 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d8c9339-1661-4620-977d-eb2ecdc7a976" (UID: "3d8c9339-1661-4620-977d-eb2ecdc7a976"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.324279 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw6g4\" (UniqueName: \"kubernetes.io/projected/3d8c9339-1661-4620-977d-eb2ecdc7a976-kube-api-access-xw6g4\") on node \"crc\" DevicePath \"\"" Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.324338 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8c9339-1661-4620-977d-eb2ecdc7a976-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.545376 4909 generic.go:334] "Generic (PLEG): container finished" podID="3d8c9339-1661-4620-977d-eb2ecdc7a976" containerID="d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d" exitCode=0 Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.545418 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2nw8" event={"ID":"3d8c9339-1661-4620-977d-eb2ecdc7a976","Type":"ContainerDied","Data":"d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d"} Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.545430 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.545441 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2nw8" event={"ID":"3d8c9339-1661-4620-977d-eb2ecdc7a976","Type":"ContainerDied","Data":"71cd853db9ad395efa298efb6dc90bb5d2aad422ed06ea86ea1aa33baec0d0bf"}
Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.545461 4909 scope.go:117] "RemoveContainer" containerID="d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d"
Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.593048 4909 scope.go:117] "RemoveContainer" containerID="57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50"
Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.593524 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n2nw8"]
Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.602851 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n2nw8"]
Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.629425 4909 scope.go:117] "RemoveContainer" containerID="9ed79d0bb2657e02ced5c9bb12436980521c56ccbf770556479058f05bd1e999"
Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.683846 4909 scope.go:117] "RemoveContainer" containerID="d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d"
Nov 26 08:57:01 crc kubenswrapper[4909]: E1126 08:57:01.684413 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d\": container with ID starting with d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d not found: ID does not exist" containerID="d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d"
Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.684466 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d"} err="failed to get container status \"d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d\": rpc error: code = NotFound desc = could not find container \"d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d\": container with ID starting with d83bf08aa51b023cec5e89f5a05061ffbdea7725617e3377aded544ce951798d not found: ID does not exist"
Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.684494 4909 scope.go:117] "RemoveContainer" containerID="57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50"
Nov 26 08:57:01 crc kubenswrapper[4909]: E1126 08:57:01.685121 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50\": container with ID starting with 57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50 not found: ID does not exist" containerID="57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50"
Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.685158 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50"} err="failed to get container status \"57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50\": rpc error: code = NotFound desc = could not find container \"57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50\": container with ID starting with 57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50 not found: ID does not exist"
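The NotFound errors above are benign: by the time the kubelet re-issues RemoveContainer for its own bookkeeping, CRI-O has already deleted the container, and a gRPC NotFound simply means "already gone". A small Go sketch of that idempotent-delete pattern, using the real google.golang.org/grpc status helpers; the remove callback is a stand-in, not an actual CRI client:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIfPresent treats gRPC NotFound as success: the container is
    // already removed, so there is nothing left to do.
    func removeIfPresent(remove func(id string) error, id string) error {
    	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
    		return err
    	}
    	return nil
    }

    func main() {
    	gone := func(id string) error {
    		return status.Error(codes.NotFound, "could not find container "+id)
    	}
    	fmt.Println(removeIfPresent(gone, "d83bf08aa51b")) // <nil>: treated as success
    }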
container \"57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50\": container with ID starting with 57df80191f1d9aa200b687190bdf87e9d2908d4f0fc3c9a7dbe1bd66b5ec5c50 not found: ID does not exist" Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.685205 4909 scope.go:117] "RemoveContainer" containerID="9ed79d0bb2657e02ced5c9bb12436980521c56ccbf770556479058f05bd1e999" Nov 26 08:57:01 crc kubenswrapper[4909]: E1126 08:57:01.685532 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed79d0bb2657e02ced5c9bb12436980521c56ccbf770556479058f05bd1e999\": container with ID starting with 9ed79d0bb2657e02ced5c9bb12436980521c56ccbf770556479058f05bd1e999 not found: ID does not exist" containerID="9ed79d0bb2657e02ced5c9bb12436980521c56ccbf770556479058f05bd1e999" Nov 26 08:57:01 crc kubenswrapper[4909]: I1126 08:57:01.685568 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed79d0bb2657e02ced5c9bb12436980521c56ccbf770556479058f05bd1e999"} err="failed to get container status \"9ed79d0bb2657e02ced5c9bb12436980521c56ccbf770556479058f05bd1e999\": rpc error: code = NotFound desc = could not find container \"9ed79d0bb2657e02ced5c9bb12436980521c56ccbf770556479058f05bd1e999\": container with ID starting with 9ed79d0bb2657e02ced5c9bb12436980521c56ccbf770556479058f05bd1e999 not found: ID does not exist" Nov 26 08:57:02 crc kubenswrapper[4909]: I1126 08:57:02.515762 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d8c9339-1661-4620-977d-eb2ecdc7a976" path="/var/lib/kubelet/pods/3d8c9339-1661-4620-977d-eb2ecdc7a976/volumes" Nov 26 08:57:04 crc kubenswrapper[4909]: I1126 08:57:04.308839 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:57:04 crc kubenswrapper[4909]: I1126 08:57:04.309177 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:57:04 crc kubenswrapper[4909]: I1126 08:57:04.376517 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:57:04 crc kubenswrapper[4909]: I1126 08:57:04.627321 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:57:05 crc kubenswrapper[4909]: I1126 08:57:05.356584 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb9ch"] Nov 26 08:57:06 crc kubenswrapper[4909]: I1126 08:57:06.606683 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nb9ch" podUID="113e1875-f6d6-4b80-9ea3-07b0329c311e" containerName="registry-server" containerID="cri-o://e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af" gracePeriod=2 Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.201065 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.301185 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.301284 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.378304 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-utilities\") pod \"113e1875-f6d6-4b80-9ea3-07b0329c311e\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") "
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.378441 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-catalog-content\") pod \"113e1875-f6d6-4b80-9ea3-07b0329c311e\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") "
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.378780 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46889\" (UniqueName: \"kubernetes.io/projected/113e1875-f6d6-4b80-9ea3-07b0329c311e-kube-api-access-46889\") pod \"113e1875-f6d6-4b80-9ea3-07b0329c311e\" (UID: \"113e1875-f6d6-4b80-9ea3-07b0329c311e\") "
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.380426 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-utilities" (OuterVolumeSpecName: "utilities") pod "113e1875-f6d6-4b80-9ea3-07b0329c311e" (UID: "113e1875-f6d6-4b80-9ea3-07b0329c311e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.391969 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113e1875-f6d6-4b80-9ea3-07b0329c311e-kube-api-access-46889" (OuterVolumeSpecName: "kube-api-access-46889") pod "113e1875-f6d6-4b80-9ea3-07b0329c311e" (UID: "113e1875-f6d6-4b80-9ea3-07b0329c311e"). InnerVolumeSpecName "kube-api-access-46889". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.418357 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "113e1875-f6d6-4b80-9ea3-07b0329c311e" (UID: "113e1875-f6d6-4b80-9ea3-07b0329c311e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
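The failing liveness probe above is an HTTP GET against 127.0.0.1:8798/health that cannot even connect. A minimal Go stand-in for such a check, using the port and path from the log; kubelet itself counts 2xx and 3xx responses as success:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get("http://127.0.0.1:8798/health")
    	if err != nil {
    		fmt.Println("probe failure:", err) // e.g. connect: connection refused
    		return
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		fmt.Println("probe failure: status", resp.StatusCode)
    		return
    	}
    	fmt.Println("probe success")
    }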
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.483334 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.483441 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/113e1875-f6d6-4b80-9ea3-07b0329c311e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.483471 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46889\" (UniqueName: \"kubernetes.io/projected/113e1875-f6d6-4b80-9ea3-07b0329c311e-kube-api-access-46889\") on node \"crc\" DevicePath \"\"" Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.622514 4909 generic.go:334] "Generic (PLEG): container finished" podID="113e1875-f6d6-4b80-9ea3-07b0329c311e" containerID="e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af" exitCode=0 Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.622652 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb9ch" Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.622651 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb9ch" event={"ID":"113e1875-f6d6-4b80-9ea3-07b0329c311e","Type":"ContainerDied","Data":"e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af"} Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.623310 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb9ch" event={"ID":"113e1875-f6d6-4b80-9ea3-07b0329c311e","Type":"ContainerDied","Data":"9b05f473921068e60bed8e0cd3587f7949f7ef3dce22d742bc931f6b3408d4d2"} Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.623347 4909 scope.go:117] "RemoveContainer" containerID="e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af" Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.658654 4909 scope.go:117] "RemoveContainer" containerID="c80cd0e2cff538dbab759494824202ace1e1cd5b98cd0ac6695ef3d39a15422f" Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.674836 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb9ch"] Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.691706 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb9ch"] Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.711248 4909 scope.go:117] "RemoveContainer" containerID="da6cc4e1ec275259dd25180a8ca9e4536c7d85b2d32fdb133a48118732e8859f" Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.746815 4909 scope.go:117] "RemoveContainer" containerID="e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af" Nov 26 08:57:07 crc kubenswrapper[4909]: E1126 08:57:07.747339 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af\": container with ID starting with e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af not found: ID does not exist" containerID="e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af" Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.747382 4909 
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.747382 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af"} err="failed to get container status \"e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af\": rpc error: code = NotFound desc = could not find container \"e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af\": container with ID starting with e54a325e49fa18c9e675204eeba2844fa6c65dc6824c1ddacded1507878956af not found: ID does not exist"
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.747408 4909 scope.go:117] "RemoveContainer" containerID="c80cd0e2cff538dbab759494824202ace1e1cd5b98cd0ac6695ef3d39a15422f"
Nov 26 08:57:07 crc kubenswrapper[4909]: E1126 08:57:07.747835 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c80cd0e2cff538dbab759494824202ace1e1cd5b98cd0ac6695ef3d39a15422f\": container with ID starting with c80cd0e2cff538dbab759494824202ace1e1cd5b98cd0ac6695ef3d39a15422f not found: ID does not exist" containerID="c80cd0e2cff538dbab759494824202ace1e1cd5b98cd0ac6695ef3d39a15422f"
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.747866 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80cd0e2cff538dbab759494824202ace1e1cd5b98cd0ac6695ef3d39a15422f"} err="failed to get container status \"c80cd0e2cff538dbab759494824202ace1e1cd5b98cd0ac6695ef3d39a15422f\": rpc error: code = NotFound desc = could not find container \"c80cd0e2cff538dbab759494824202ace1e1cd5b98cd0ac6695ef3d39a15422f\": container with ID starting with c80cd0e2cff538dbab759494824202ace1e1cd5b98cd0ac6695ef3d39a15422f not found: ID does not exist"
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.747884 4909 scope.go:117] "RemoveContainer" containerID="da6cc4e1ec275259dd25180a8ca9e4536c7d85b2d32fdb133a48118732e8859f"
Nov 26 08:57:07 crc kubenswrapper[4909]: E1126 08:57:07.748468 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da6cc4e1ec275259dd25180a8ca9e4536c7d85b2d32fdb133a48118732e8859f\": container with ID starting with da6cc4e1ec275259dd25180a8ca9e4536c7d85b2d32fdb133a48118732e8859f not found: ID does not exist" containerID="da6cc4e1ec275259dd25180a8ca9e4536c7d85b2d32fdb133a48118732e8859f"
Nov 26 08:57:07 crc kubenswrapper[4909]: I1126 08:57:07.748500 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6cc4e1ec275259dd25180a8ca9e4536c7d85b2d32fdb133a48118732e8859f"} err="failed to get container status \"da6cc4e1ec275259dd25180a8ca9e4536c7d85b2d32fdb133a48118732e8859f\": rpc error: code = NotFound desc = could not find container \"da6cc4e1ec275259dd25180a8ca9e4536c7d85b2d32fdb133a48118732e8859f\": container with ID starting with da6cc4e1ec275259dd25180a8ca9e4536c7d85b2d32fdb133a48118732e8859f not found: ID does not exist"
Nov 26 08:57:08 crc kubenswrapper[4909]: I1126 08:57:08.517838 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="113e1875-f6d6-4b80-9ea3-07b0329c311e" path="/var/lib/kubelet/pods/113e1875-f6d6-4b80-9ea3-07b0329c311e/volumes"
Nov 26 08:57:37 crc kubenswrapper[4909]: I1126 08:57:37.301650 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 08:57:37 crc kubenswrapper[4909]: I1126 08:57:37.302224 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 08:57:37 crc kubenswrapper[4909]: I1126 08:57:37.302291 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv"
Nov 26 08:57:37 crc kubenswrapper[4909]: I1126 08:57:37.303633 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 26 08:57:37 crc kubenswrapper[4909]: I1126 08:57:37.303749 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" gracePeriod=600
Nov 26 08:57:37 crc kubenswrapper[4909]: E1126 08:57:37.438127 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:57:37 crc kubenswrapper[4909]: I1126 08:57:37.961239 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" exitCode=0
Nov 26 08:57:37 crc kubenswrapper[4909]: I1126 08:57:37.961308 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"}
Nov 26 08:57:37 crc kubenswrapper[4909]: I1126 08:57:37.961434 4909 scope.go:117] "RemoveContainer" containerID="c4a01709a31556cba47cfbc7acf4fffb334a8059c80b7836f817a64be470f2c6"
Nov 26 08:57:37 crc kubenswrapper[4909]: I1126 08:57:37.962280 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"
Nov 26 08:57:37 crc kubenswrapper[4909]: E1126 08:57:37.962638 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:57:50 crc kubenswrapper[4909]: I1126 08:57:50.499912 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"
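The repeating "back-off 5m0s" entries from here on show CrashLoopBackOff at its ceiling: kubelet delays each container restart with an exponential back-off, commonly documented as starting at 10s and doubling up to a 5m cap. A sketch of that schedule (the 10s base and doubling factor are the documented defaults, not something visible in this log):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const max = 5 * time.Minute // the "back-off 5m0s" cap in the message
    	delay := 10 * time.Second   // documented starting delay
    	for i := 1; i <= 8; i++ {
    		fmt.Printf("restart %d waits %v\n", i, delay)
    		delay *= 2
    		if delay > max {
    			delay = max // every later restart waits the full 5m0s
    		}
    	}
    }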
containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 08:57:50 crc kubenswrapper[4909]: E1126 08:57:50.500947 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:58:01 crc kubenswrapper[4909]: I1126 08:58:01.233844 4909 generic.go:334] "Generic (PLEG): container finished" podID="03058c3f-9b59-4c2c-ada7-8291a75dae01" containerID="35dce27f725db138232b57be05dd6932438191061b8b868a21a93c61a1a5016d" exitCode=0 Nov 26 08:58:01 crc kubenswrapper[4909]: I1126 08:58:01.233968 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" event={"ID":"03058c3f-9b59-4c2c-ada7-8291a75dae01","Type":"ContainerDied","Data":"35dce27f725db138232b57be05dd6932438191061b8b868a21a93c61a1a5016d"} Nov 26 08:58:02 crc kubenswrapper[4909]: I1126 08:58:02.755953 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" Nov 26 08:58:02 crc kubenswrapper[4909]: I1126 08:58:02.904194 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-bootstrap-combined-ca-bundle\") pod \"03058c3f-9b59-4c2c-ada7-8291a75dae01\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " Nov 26 08:58:02 crc kubenswrapper[4909]: I1126 08:58:02.904376 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-inventory\") pod \"03058c3f-9b59-4c2c-ada7-8291a75dae01\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " Nov 26 08:58:02 crc kubenswrapper[4909]: I1126 08:58:02.904630 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ceph\") pod \"03058c3f-9b59-4c2c-ada7-8291a75dae01\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " Nov 26 08:58:02 crc kubenswrapper[4909]: I1126 08:58:02.904810 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx95p\" (UniqueName: \"kubernetes.io/projected/03058c3f-9b59-4c2c-ada7-8291a75dae01-kube-api-access-dx95p\") pod \"03058c3f-9b59-4c2c-ada7-8291a75dae01\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " Nov 26 08:58:02 crc kubenswrapper[4909]: I1126 08:58:02.905039 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ssh-key\") pod \"03058c3f-9b59-4c2c-ada7-8291a75dae01\" (UID: \"03058c3f-9b59-4c2c-ada7-8291a75dae01\") " Nov 26 08:58:02 crc kubenswrapper[4909]: I1126 08:58:02.909922 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ceph" (OuterVolumeSpecName: "ceph") pod "03058c3f-9b59-4c2c-ada7-8291a75dae01" (UID: "03058c3f-9b59-4c2c-ada7-8291a75dae01"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:58:02 crc kubenswrapper[4909]: I1126 08:58:02.910000 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "03058c3f-9b59-4c2c-ada7-8291a75dae01" (UID: "03058c3f-9b59-4c2c-ada7-8291a75dae01"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:58:02 crc kubenswrapper[4909]: I1126 08:58:02.910259 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03058c3f-9b59-4c2c-ada7-8291a75dae01-kube-api-access-dx95p" (OuterVolumeSpecName: "kube-api-access-dx95p") pod "03058c3f-9b59-4c2c-ada7-8291a75dae01" (UID: "03058c3f-9b59-4c2c-ada7-8291a75dae01"). InnerVolumeSpecName "kube-api-access-dx95p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:58:02 crc kubenswrapper[4909]: I1126 08:58:02.935263 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-inventory" (OuterVolumeSpecName: "inventory") pod "03058c3f-9b59-4c2c-ada7-8291a75dae01" (UID: "03058c3f-9b59-4c2c-ada7-8291a75dae01"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:58:02 crc kubenswrapper[4909]: I1126 08:58:02.958083 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "03058c3f-9b59-4c2c-ada7-8291a75dae01" (UID: "03058c3f-9b59-4c2c-ada7-8291a75dae01"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.009867 4909 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.009910 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.009922 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.009934 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx95p\" (UniqueName: \"kubernetes.io/projected/03058c3f-9b59-4c2c-ada7-8291a75dae01-kube-api-access-dx95p\") on node \"crc\" DevicePath \"\"" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.009946 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03058c3f-9b59-4c2c-ada7-8291a75dae01-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.260878 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-dclj2" event={"ID":"03058c3f-9b59-4c2c-ada7-8291a75dae01","Type":"ContainerDied","Data":"744f175985a6b280896eff60e2cdc4be6d5cd934d18392a6556cab1fce04e5c2"} Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.261354 4909 
Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.261354 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="744f175985a6b280896eff60e2cdc4be6d5cd934d18392a6556cab1fce04e5c2"
Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.260973 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-dclj2"
Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.377165 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-dh5c6"]
Nov 26 08:58:03 crc kubenswrapper[4909]: E1126 08:58:03.377861 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8c9339-1661-4620-977d-eb2ecdc7a976" containerName="registry-server"
Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.377896 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8c9339-1661-4620-977d-eb2ecdc7a976" containerName="registry-server"
Nov 26 08:58:03 crc kubenswrapper[4909]: E1126 08:58:03.377914 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113e1875-f6d6-4b80-9ea3-07b0329c311e" containerName="registry-server"
Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.377925 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="113e1875-f6d6-4b80-9ea3-07b0329c311e" containerName="registry-server"
Nov 26 08:58:03 crc kubenswrapper[4909]: E1126 08:58:03.377962 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113e1875-f6d6-4b80-9ea3-07b0329c311e" containerName="extract-content"
Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.377970 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="113e1875-f6d6-4b80-9ea3-07b0329c311e" containerName="extract-content"
Nov 26 08:58:03 crc kubenswrapper[4909]: E1126 08:58:03.377996 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03058c3f-9b59-4c2c-ada7-8291a75dae01" containerName="bootstrap-openstack-openstack-cell1"
Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.378004 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="03058c3f-9b59-4c2c-ada7-8291a75dae01" containerName="bootstrap-openstack-openstack-cell1"
Nov 26 08:58:03 crc kubenswrapper[4909]: E1126 08:58:03.378020 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113e1875-f6d6-4b80-9ea3-07b0329c311e" containerName="extract-utilities"
Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.378028 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="113e1875-f6d6-4b80-9ea3-07b0329c311e" containerName="extract-utilities"
Nov 26 08:58:03 crc kubenswrapper[4909]: E1126 08:58:03.378050 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8c9339-1661-4620-977d-eb2ecdc7a976" containerName="extract-utilities"
Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.378058 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8c9339-1661-4620-977d-eb2ecdc7a976" containerName="extract-utilities"
Nov 26 08:58:03 crc kubenswrapper[4909]: E1126 08:58:03.378070 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8c9339-1661-4620-977d-eb2ecdc7a976" containerName="extract-content"
Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.378080 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8c9339-1661-4620-977d-eb2ecdc7a976" containerName="extract-content"
Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.378380 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="113e1875-f6d6-4b80-9ea3-07b0329c311e" containerName="registry-server"
containerName="registry-server" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.378401 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="03058c3f-9b59-4c2c-ada7-8291a75dae01" containerName="bootstrap-openstack-openstack-cell1" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.378413 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8c9339-1661-4620-977d-eb2ecdc7a976" containerName="registry-server" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.379448 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.381924 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.381963 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.383118 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.383199 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.389705 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-dh5c6"] Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.523111 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ssh-key\") pod \"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.523208 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-inventory\") pod \"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.523370 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ceph\") pod \"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.524204 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5nj5\" (UniqueName: \"kubernetes.io/projected/e0b9fe64-4d4f-46a3-849f-820bdf130897-kube-api-access-s5nj5\") pod \"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.626409 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5nj5\" (UniqueName: \"kubernetes.io/projected/e0b9fe64-4d4f-46a3-849f-820bdf130897-kube-api-access-s5nj5\") pod 
\"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.626723 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ssh-key\") pod \"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.626779 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-inventory\") pod \"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.626889 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ceph\") pod \"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.631886 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ceph\") pod \"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.632512 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-inventory\") pod \"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.640480 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ssh-key\") pod \"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.646671 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5nj5\" (UniqueName: \"kubernetes.io/projected/e0b9fe64-4d4f-46a3-849f-820bdf130897-kube-api-access-s5nj5\") pod \"download-cache-openstack-openstack-cell1-dh5c6\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:58:03 crc kubenswrapper[4909]: I1126 08:58:03.709636 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 08:58:04 crc kubenswrapper[4909]: I1126 08:58:04.273331 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-dh5c6"]
Nov 26 08:58:04 crc kubenswrapper[4909]: I1126 08:58:04.499506 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"
Nov 26 08:58:04 crc kubenswrapper[4909]: E1126 08:58:04.500067 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:58:05 crc kubenswrapper[4909]: I1126 08:58:05.293366 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" event={"ID":"e0b9fe64-4d4f-46a3-849f-820bdf130897","Type":"ContainerStarted","Data":"997e355c7c93280a4e065f00f852fe99f8b6a48780879e51a1f69307c3e5a82e"}
Nov 26 08:58:05 crc kubenswrapper[4909]: I1126 08:58:05.293799 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" event={"ID":"e0b9fe64-4d4f-46a3-849f-820bdf130897","Type":"ContainerStarted","Data":"d97a1d1301ceba03960414aa9d60678bc56dc10fb765f9e1c1030d93f9f8e3b9"}
Nov 26 08:58:05 crc kubenswrapper[4909]: I1126 08:58:05.326411 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" podStartSLOduration=1.7378530049999998 podStartE2EDuration="2.326378783s" podCreationTimestamp="2025-11-26 08:58:03 +0000 UTC" firstStartedPulling="2025-11-26 08:58:04.284696433 +0000 UTC m=+7056.430907609" lastFinishedPulling="2025-11-26 08:58:04.873222221 +0000 UTC m=+7057.019433387" observedRunningTime="2025-11-26 08:58:05.312059153 +0000 UTC m=+7057.458270359" watchObservedRunningTime="2025-11-26 08:58:05.326378783 +0000 UTC m=+7057.472589989"
Nov 26 08:58:15 crc kubenswrapper[4909]: I1126 08:58:15.498991 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"
Nov 26 08:58:15 crc kubenswrapper[4909]: E1126 08:58:15.499744 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 08:58:30 crc kubenswrapper[4909]: I1126 08:58:30.499900 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"
Nov 26 08:58:30 crc kubenswrapper[4909]: E1126 08:58:30.500877 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
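The m=+7056.43... suffixes in the startup-latency entry above are Go's monotonic clock reading, which time.Time carries alongside wall time so that intervals survive wall-clock jumps; the tracker's durations are differences of exactly these readings. A quick demonstration of the same behavior:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	t0 := time.Now()
    	time.Sleep(50 * time.Millisecond)
    	t1 := time.Now()
    	fmt.Println(t1)         // prints "... m=+x.xxxxxxxxx", as in the log entries
    	fmt.Println(t1.Sub(t0)) // ~50ms, computed from the monotonic readings
    }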
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:58:43 crc kubenswrapper[4909]: I1126 08:58:43.500209 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 08:58:43 crc kubenswrapper[4909]: E1126 08:58:43.501167 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:58:55 crc kubenswrapper[4909]: I1126 08:58:55.499183 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 08:58:55 crc kubenswrapper[4909]: E1126 08:58:55.499931 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:59:09 crc kubenswrapper[4909]: I1126 08:59:09.498501 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 08:59:09 crc kubenswrapper[4909]: E1126 08:59:09.499142 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:59:21 crc kubenswrapper[4909]: I1126 08:59:21.499564 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 08:59:21 crc kubenswrapper[4909]: E1126 08:59:21.500556 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:59:36 crc kubenswrapper[4909]: I1126 08:59:36.280130 4909 generic.go:334] "Generic (PLEG): container finished" podID="e0b9fe64-4d4f-46a3-849f-820bdf130897" containerID="997e355c7c93280a4e065f00f852fe99f8b6a48780879e51a1f69307c3e5a82e" exitCode=0 Nov 26 08:59:36 crc kubenswrapper[4909]: I1126 08:59:36.280224 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" event={"ID":"e0b9fe64-4d4f-46a3-849f-820bdf130897","Type":"ContainerDied","Data":"997e355c7c93280a4e065f00f852fe99f8b6a48780879e51a1f69307c3e5a82e"} Nov 26 08:59:36 crc kubenswrapper[4909]: I1126 08:59:36.500668 4909 scope.go:117] "RemoveContainer" 
containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 08:59:36 crc kubenswrapper[4909]: E1126 08:59:36.501438 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.028239 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.131906 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5nj5\" (UniqueName: \"kubernetes.io/projected/e0b9fe64-4d4f-46a3-849f-820bdf130897-kube-api-access-s5nj5\") pod \"e0b9fe64-4d4f-46a3-849f-820bdf130897\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.131965 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ceph\") pod \"e0b9fe64-4d4f-46a3-849f-820bdf130897\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.132091 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-inventory\") pod \"e0b9fe64-4d4f-46a3-849f-820bdf130897\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.132220 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ssh-key\") pod \"e0b9fe64-4d4f-46a3-849f-820bdf130897\" (UID: \"e0b9fe64-4d4f-46a3-849f-820bdf130897\") " Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.137495 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b9fe64-4d4f-46a3-849f-820bdf130897-kube-api-access-s5nj5" (OuterVolumeSpecName: "kube-api-access-s5nj5") pod "e0b9fe64-4d4f-46a3-849f-820bdf130897" (UID: "e0b9fe64-4d4f-46a3-849f-820bdf130897"). InnerVolumeSpecName "kube-api-access-s5nj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.140391 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ceph" (OuterVolumeSpecName: "ceph") pod "e0b9fe64-4d4f-46a3-849f-820bdf130897" (UID: "e0b9fe64-4d4f-46a3-849f-820bdf130897"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.162209 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e0b9fe64-4d4f-46a3-849f-820bdf130897" (UID: "e0b9fe64-4d4f-46a3-849f-820bdf130897"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.164084 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-inventory" (OuterVolumeSpecName: "inventory") pod "e0b9fe64-4d4f-46a3-849f-820bdf130897" (UID: "e0b9fe64-4d4f-46a3-849f-820bdf130897"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.235027 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.235062 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5nj5\" (UniqueName: \"kubernetes.io/projected/e0b9fe64-4d4f-46a3-849f-820bdf130897-kube-api-access-s5nj5\") on node \"crc\" DevicePath \"\"" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.235073 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.235082 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0b9fe64-4d4f-46a3-849f-820bdf130897-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.302051 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" event={"ID":"e0b9fe64-4d4f-46a3-849f-820bdf130897","Type":"ContainerDied","Data":"d97a1d1301ceba03960414aa9d60678bc56dc10fb765f9e1c1030d93f9f8e3b9"} Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.302093 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d97a1d1301ceba03960414aa9d60678bc56dc10fb765f9e1c1030d93f9f8e3b9" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.302100 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-dh5c6" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.417523 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-thnkk"] Nov 26 08:59:38 crc kubenswrapper[4909]: E1126 08:59:38.418034 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b9fe64-4d4f-46a3-849f-820bdf130897" containerName="download-cache-openstack-openstack-cell1" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.418052 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b9fe64-4d4f-46a3-849f-820bdf130897" containerName="download-cache-openstack-openstack-cell1" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.418276 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b9fe64-4d4f-46a3-849f-820bdf130897" containerName="download-cache-openstack-openstack-cell1" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.419097 4909 util.go:30] "No sandbox for pod can be found. 
Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.423781 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.424047 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk"
Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.424193 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.424344 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.453132 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-thnkk"]
Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.541086 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-inventory\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk"
Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.541166 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ceph\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk"
Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.541232 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ssh-key\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk"
Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.541292 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znjl7\" (UniqueName: \"kubernetes.io/projected/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-kube-api-access-znjl7\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk"
Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.642942 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ssh-key\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk"
Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.643000 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znjl7\" (UniqueName: \"kubernetes.io/projected/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-kube-api-access-znjl7\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk"
pod="openstack/configure-network-openstack-openstack-cell1-thnkk" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.643189 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-inventory\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.643283 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ceph\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.647321 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ceph\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.647610 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ssh-key\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.647630 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-inventory\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.660335 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znjl7\" (UniqueName: \"kubernetes.io/projected/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-kube-api-access-znjl7\") pod \"configure-network-openstack-openstack-cell1-thnkk\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " pod="openstack/configure-network-openstack-openstack-cell1-thnkk" Nov 26 08:59:38 crc kubenswrapper[4909]: I1126 08:59:38.753656 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-thnkk" Nov 26 08:59:39 crc kubenswrapper[4909]: I1126 08:59:39.343171 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-thnkk"] Nov 26 08:59:40 crc kubenswrapper[4909]: I1126 08:59:40.321211 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-thnkk" event={"ID":"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3","Type":"ContainerStarted","Data":"60b7ee97636d829f5cbc6c3f6e9c1f82fc121c29f4a1a3469250e00cf730ba98"} Nov 26 08:59:40 crc kubenswrapper[4909]: I1126 08:59:40.321521 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-thnkk" event={"ID":"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3","Type":"ContainerStarted","Data":"c88c649932c00e4fcbd2c7707427e3aaadf3adedf8da27cfe32d0b0b2cb9c370"} Nov 26 08:59:40 crc kubenswrapper[4909]: I1126 08:59:40.338916 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-thnkk" podStartSLOduration=1.7602517629999999 podStartE2EDuration="2.338897645s" podCreationTimestamp="2025-11-26 08:59:38 +0000 UTC" firstStartedPulling="2025-11-26 08:59:39.346672982 +0000 UTC m=+7151.492884148" lastFinishedPulling="2025-11-26 08:59:39.925318824 +0000 UTC m=+7152.071530030" observedRunningTime="2025-11-26 08:59:40.337690302 +0000 UTC m=+7152.483901468" watchObservedRunningTime="2025-11-26 08:59:40.338897645 +0000 UTC m=+7152.485108811" Nov 26 08:59:47 crc kubenswrapper[4909]: I1126 08:59:47.499188 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 08:59:47 crc kubenswrapper[4909]: E1126 08:59:47.499907 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 08:59:59 crc kubenswrapper[4909]: I1126 08:59:59.499433 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 08:59:59 crc kubenswrapper[4909]: E1126 08:59:59.500160 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.198026 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh"] Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.199934 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.207230 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh"] Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.208313 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.208883 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.331974 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85617b4e-fe64-46d6-8ca8-7201a5012e8f-secret-volume\") pod \"collect-profiles-29402460-drfkh\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.333202 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85617b4e-fe64-46d6-8ca8-7201a5012e8f-config-volume\") pod \"collect-profiles-29402460-drfkh\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.333555 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7cwl\" (UniqueName: \"kubernetes.io/projected/85617b4e-fe64-46d6-8ca8-7201a5012e8f-kube-api-access-f7cwl\") pod \"collect-profiles-29402460-drfkh\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.436458 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7cwl\" (UniqueName: \"kubernetes.io/projected/85617b4e-fe64-46d6-8ca8-7201a5012e8f-kube-api-access-f7cwl\") pod \"collect-profiles-29402460-drfkh\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.436644 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85617b4e-fe64-46d6-8ca8-7201a5012e8f-secret-volume\") pod \"collect-profiles-29402460-drfkh\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.436700 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85617b4e-fe64-46d6-8ca8-7201a5012e8f-config-volume\") pod \"collect-profiles-29402460-drfkh\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.437786 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85617b4e-fe64-46d6-8ca8-7201a5012e8f-config-volume\") pod 
\"collect-profiles-29402460-drfkh\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.446181 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85617b4e-fe64-46d6-8ca8-7201a5012e8f-secret-volume\") pod \"collect-profiles-29402460-drfkh\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.455979 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7cwl\" (UniqueName: \"kubernetes.io/projected/85617b4e-fe64-46d6-8ca8-7201a5012e8f-kube-api-access-f7cwl\") pod \"collect-profiles-29402460-drfkh\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.529318 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:00 crc kubenswrapper[4909]: I1126 09:00:00.981974 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh"] Nov 26 09:00:00 crc kubenswrapper[4909]: W1126 09:00:00.986494 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85617b4e_fe64_46d6_8ca8_7201a5012e8f.slice/crio-fa11094c047ad98af3a7c298d8a4f13e7634fbcba50c471b2c3f464f755332d4 WatchSource:0}: Error finding container fa11094c047ad98af3a7c298d8a4f13e7634fbcba50c471b2c3f464f755332d4: Status 404 returned error can't find the container with id fa11094c047ad98af3a7c298d8a4f13e7634fbcba50c471b2c3f464f755332d4 Nov 26 09:00:01 crc kubenswrapper[4909]: I1126 09:00:01.566122 4909 generic.go:334] "Generic (PLEG): container finished" podID="85617b4e-fe64-46d6-8ca8-7201a5012e8f" containerID="1e4ce7f84a4db07e739ee4e1c4046712e24ebf8306f10cdc516f5cce3991c54d" exitCode=0 Nov 26 09:00:01 crc kubenswrapper[4909]: I1126 09:00:01.566298 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" event={"ID":"85617b4e-fe64-46d6-8ca8-7201a5012e8f","Type":"ContainerDied","Data":"1e4ce7f84a4db07e739ee4e1c4046712e24ebf8306f10cdc516f5cce3991c54d"} Nov 26 09:00:01 crc kubenswrapper[4909]: I1126 09:00:01.566655 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" event={"ID":"85617b4e-fe64-46d6-8ca8-7201a5012e8f","Type":"ContainerStarted","Data":"fa11094c047ad98af3a7c298d8a4f13e7634fbcba50c471b2c3f464f755332d4"} Nov 26 09:00:02 crc kubenswrapper[4909]: I1126 09:00:02.953237 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.102637 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7cwl\" (UniqueName: \"kubernetes.io/projected/85617b4e-fe64-46d6-8ca8-7201a5012e8f-kube-api-access-f7cwl\") pod \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.102751 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85617b4e-fe64-46d6-8ca8-7201a5012e8f-config-volume\") pod \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.102823 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85617b4e-fe64-46d6-8ca8-7201a5012e8f-secret-volume\") pod \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\" (UID: \"85617b4e-fe64-46d6-8ca8-7201a5012e8f\") " Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.103401 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85617b4e-fe64-46d6-8ca8-7201a5012e8f-config-volume" (OuterVolumeSpecName: "config-volume") pod "85617b4e-fe64-46d6-8ca8-7201a5012e8f" (UID: "85617b4e-fe64-46d6-8ca8-7201a5012e8f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.108330 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85617b4e-fe64-46d6-8ca8-7201a5012e8f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "85617b4e-fe64-46d6-8ca8-7201a5012e8f" (UID: "85617b4e-fe64-46d6-8ca8-7201a5012e8f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.108863 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85617b4e-fe64-46d6-8ca8-7201a5012e8f-kube-api-access-f7cwl" (OuterVolumeSpecName: "kube-api-access-f7cwl") pod "85617b4e-fe64-46d6-8ca8-7201a5012e8f" (UID: "85617b4e-fe64-46d6-8ca8-7201a5012e8f"). InnerVolumeSpecName "kube-api-access-f7cwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.205451 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7cwl\" (UniqueName: \"kubernetes.io/projected/85617b4e-fe64-46d6-8ca8-7201a5012e8f-kube-api-access-f7cwl\") on node \"crc\" DevicePath \"\"" Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.205760 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85617b4e-fe64-46d6-8ca8-7201a5012e8f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.205771 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85617b4e-fe64-46d6-8ca8-7201a5012e8f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.594349 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" event={"ID":"85617b4e-fe64-46d6-8ca8-7201a5012e8f","Type":"ContainerDied","Data":"fa11094c047ad98af3a7c298d8a4f13e7634fbcba50c471b2c3f464f755332d4"} Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.594391 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa11094c047ad98af3a7c298d8a4f13e7634fbcba50c471b2c3f464f755332d4" Nov 26 09:00:03 crc kubenswrapper[4909]: I1126 09:00:03.594442 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh" Nov 26 09:00:04 crc kubenswrapper[4909]: I1126 09:00:04.037810 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4"] Nov 26 09:00:04 crc kubenswrapper[4909]: I1126 09:00:04.054136 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402415-7jbd4"] Nov 26 09:00:04 crc kubenswrapper[4909]: I1126 09:00:04.515725 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609aeb0b-9285-419e-986d-5b3bd41468c8" path="/var/lib/kubelet/pods/609aeb0b-9285-419e-986d-5b3bd41468c8/volumes" Nov 26 09:00:10 crc kubenswrapper[4909]: I1126 09:00:10.499530 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 09:00:10 crc kubenswrapper[4909]: E1126 09:00:10.500277 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:00:23 crc kubenswrapper[4909]: I1126 09:00:23.498827 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 09:00:23 crc kubenswrapper[4909]: E1126 09:00:23.499803 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:00:34 crc kubenswrapper[4909]: I1126 09:00:34.502970 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 09:00:34 crc kubenswrapper[4909]: E1126 09:00:34.505168 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:00:46 crc kubenswrapper[4909]: I1126 09:00:46.445356 4909 scope.go:117] "RemoveContainer" containerID="200aeef3a31c5b1e855def9c5cc6bc4d697083bb276c984f430b753f13b116f8" Nov 26 09:00:48 crc kubenswrapper[4909]: I1126 09:00:48.505201 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 09:00:48 crc kubenswrapper[4909]: E1126 09:00:48.506079 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.638904 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qsg9m"] Nov 26 09:00:52 crc kubenswrapper[4909]: E1126 09:00:52.639962 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85617b4e-fe64-46d6-8ca8-7201a5012e8f" containerName="collect-profiles" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.639977 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="85617b4e-fe64-46d6-8ca8-7201a5012e8f" containerName="collect-profiles" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.640282 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="85617b4e-fe64-46d6-8ca8-7201a5012e8f" containerName="collect-profiles" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.643197 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.652257 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsg9m"] Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.746201 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-utilities\") pod \"certified-operators-qsg9m\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.746247 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkfhc\" (UniqueName: \"kubernetes.io/projected/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-kube-api-access-hkfhc\") pod \"certified-operators-qsg9m\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.746430 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-catalog-content\") pod \"certified-operators-qsg9m\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.849929 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-catalog-content\") pod \"certified-operators-qsg9m\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.850246 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-utilities\") pod \"certified-operators-qsg9m\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.850300 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkfhc\" (UniqueName: \"kubernetes.io/projected/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-kube-api-access-hkfhc\") pod \"certified-operators-qsg9m\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.850422 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-catalog-content\") pod \"certified-operators-qsg9m\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.850691 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-utilities\") pod \"certified-operators-qsg9m\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.869315 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hkfhc\" (UniqueName: \"kubernetes.io/projected/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-kube-api-access-hkfhc\") pod \"certified-operators-qsg9m\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:00:52 crc kubenswrapper[4909]: I1126 09:00:52.975002 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:00:53 crc kubenswrapper[4909]: I1126 09:00:53.469478 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsg9m"] Nov 26 09:00:54 crc kubenswrapper[4909]: I1126 09:00:54.154539 4909 generic.go:334] "Generic (PLEG): container finished" podID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" containerID="8ebc8bf0bf568c9a37fafd2d3e822a818d41152a07fcf43a9195397897e15a9e" exitCode=0 Nov 26 09:00:54 crc kubenswrapper[4909]: I1126 09:00:54.154666 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsg9m" event={"ID":"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150","Type":"ContainerDied","Data":"8ebc8bf0bf568c9a37fafd2d3e822a818d41152a07fcf43a9195397897e15a9e"} Nov 26 09:00:54 crc kubenswrapper[4909]: I1126 09:00:54.154977 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsg9m" event={"ID":"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150","Type":"ContainerStarted","Data":"1aa6ed83e0c21bc438c32fd703fca86900decf01d458196a34273ffb51fc64a1"} Nov 26 09:00:54 crc kubenswrapper[4909]: I1126 09:00:54.157145 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 09:00:56 crc kubenswrapper[4909]: I1126 09:00:56.199280 4909 generic.go:334] "Generic (PLEG): container finished" podID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" containerID="58bd7218627da33ed46df5d4b34a0dd57ec9e14374b95b525dc0529b3d17d99d" exitCode=0 Nov 26 09:00:56 crc kubenswrapper[4909]: I1126 09:00:56.199359 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsg9m" event={"ID":"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150","Type":"ContainerDied","Data":"58bd7218627da33ed46df5d4b34a0dd57ec9e14374b95b525dc0529b3d17d99d"} Nov 26 09:00:57 crc kubenswrapper[4909]: I1126 09:00:57.213814 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsg9m" event={"ID":"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150","Type":"ContainerStarted","Data":"31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00"} Nov 26 09:00:57 crc kubenswrapper[4909]: I1126 09:00:57.240111 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qsg9m" podStartSLOduration=2.6217344430000002 podStartE2EDuration="5.240082149s" podCreationTimestamp="2025-11-26 09:00:52 +0000 UTC" firstStartedPulling="2025-11-26 09:00:54.156945815 +0000 UTC m=+7226.303156981" lastFinishedPulling="2025-11-26 09:00:56.775293521 +0000 UTC m=+7228.921504687" observedRunningTime="2025-11-26 09:00:57.232859172 +0000 UTC m=+7229.379070348" watchObservedRunningTime="2025-11-26 09:00:57.240082149 +0000 UTC m=+7229.386293355" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.156864 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29402461-nkbfr"] Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.160285 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.193807 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29402461-nkbfr"] Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.222395 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-config-data\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.222434 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-combined-ca-bundle\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.222481 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77jqp\" (UniqueName: \"kubernetes.io/projected/c025b17f-fdf8-4946-b88b-b33958ad8d0f-kube-api-access-77jqp\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.222633 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-fernet-keys\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.325479 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-fernet-keys\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.326018 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-config-data\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.326079 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-combined-ca-bundle\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.326173 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77jqp\" (UniqueName: \"kubernetes.io/projected/c025b17f-fdf8-4946-b88b-b33958ad8d0f-kube-api-access-77jqp\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.333520 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-fernet-keys\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.333520 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-combined-ca-bundle\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.335377 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-config-data\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.350632 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77jqp\" (UniqueName: \"kubernetes.io/projected/c025b17f-fdf8-4946-b88b-b33958ad8d0f-kube-api-access-77jqp\") pod \"keystone-cron-29402461-nkbfr\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.484876 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:00 crc kubenswrapper[4909]: I1126 09:01:00.944934 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29402461-nkbfr"] Nov 26 09:01:00 crc kubenswrapper[4909]: W1126 09:01:00.947882 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc025b17f_fdf8_4946_b88b_b33958ad8d0f.slice/crio-b7b5a1459f90393a903f6f846c91f60172227e89c6a1793fcb4336dabefdb65b WatchSource:0}: Error finding container b7b5a1459f90393a903f6f846c91f60172227e89c6a1793fcb4336dabefdb65b: Status 404 returned error can't find the container with id b7b5a1459f90393a903f6f846c91f60172227e89c6a1793fcb4336dabefdb65b Nov 26 09:01:01 crc kubenswrapper[4909]: I1126 09:01:01.263888 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29402461-nkbfr" event={"ID":"c025b17f-fdf8-4946-b88b-b33958ad8d0f","Type":"ContainerStarted","Data":"2d8956a4a594de7c501ebe1b25289daff45e78f9449f2e804d966c40c5165b23"} Nov 26 09:01:01 crc kubenswrapper[4909]: I1126 09:01:01.264289 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29402461-nkbfr" event={"ID":"c025b17f-fdf8-4946-b88b-b33958ad8d0f","Type":"ContainerStarted","Data":"b7b5a1459f90393a903f6f846c91f60172227e89c6a1793fcb4336dabefdb65b"} Nov 26 09:01:01 crc kubenswrapper[4909]: I1126 09:01:01.288128 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29402461-nkbfr" podStartSLOduration=1.2881108000000001 podStartE2EDuration="1.2881108s" podCreationTimestamp="2025-11-26 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 09:01:01.278829667 +0000 UTC m=+7233.425040833" watchObservedRunningTime="2025-11-26 09:01:01.2881108 +0000 UTC m=+7233.434321966" Nov 26 09:01:02 crc kubenswrapper[4909]: I1126 09:01:02.499274 
4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 09:01:02 crc kubenswrapper[4909]: E1126 09:01:02.499555 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:01:02 crc kubenswrapper[4909]: I1126 09:01:02.975578 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:01:02 crc kubenswrapper[4909]: I1126 09:01:02.975914 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:01:03 crc kubenswrapper[4909]: I1126 09:01:03.063485 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:01:03 crc kubenswrapper[4909]: I1126 09:01:03.351348 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:01:03 crc kubenswrapper[4909]: I1126 09:01:03.401773 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsg9m"] Nov 26 09:01:05 crc kubenswrapper[4909]: I1126 09:01:05.302126 4909 generic.go:334] "Generic (PLEG): container finished" podID="c025b17f-fdf8-4946-b88b-b33958ad8d0f" containerID="2d8956a4a594de7c501ebe1b25289daff45e78f9449f2e804d966c40c5165b23" exitCode=0 Nov 26 09:01:05 crc kubenswrapper[4909]: I1126 09:01:05.302184 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29402461-nkbfr" event={"ID":"c025b17f-fdf8-4946-b88b-b33958ad8d0f","Type":"ContainerDied","Data":"2d8956a4a594de7c501ebe1b25289daff45e78f9449f2e804d966c40c5165b23"} Nov 26 09:01:05 crc kubenswrapper[4909]: I1126 09:01:05.302837 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qsg9m" podUID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" containerName="registry-server" containerID="cri-o://31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00" gracePeriod=2 Nov 26 09:01:05 crc kubenswrapper[4909]: I1126 09:01:05.980956 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.166884 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-utilities\") pod \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.167098 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-catalog-content\") pod \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.167151 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkfhc\" (UniqueName: \"kubernetes.io/projected/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-kube-api-access-hkfhc\") pod \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\" (UID: \"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150\") " Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.167741 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-utilities" (OuterVolumeSpecName: "utilities") pod "bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" (UID: "bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.167970 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.173142 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-kube-api-access-hkfhc" (OuterVolumeSpecName: "kube-api-access-hkfhc") pod "bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" (UID: "bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150"). InnerVolumeSpecName "kube-api-access-hkfhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.213375 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" (UID: "bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.270419 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.270664 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkfhc\" (UniqueName: \"kubernetes.io/projected/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150-kube-api-access-hkfhc\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.314013 4909 generic.go:334] "Generic (PLEG): container finished" podID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" containerID="31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00" exitCode=0 Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.314105 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsg9m" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.314157 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsg9m" event={"ID":"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150","Type":"ContainerDied","Data":"31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00"} Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.314259 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsg9m" event={"ID":"bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150","Type":"ContainerDied","Data":"1aa6ed83e0c21bc438c32fd703fca86900decf01d458196a34273ffb51fc64a1"} Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.314308 4909 scope.go:117] "RemoveContainer" containerID="31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.354322 4909 scope.go:117] "RemoveContainer" containerID="58bd7218627da33ed46df5d4b34a0dd57ec9e14374b95b525dc0529b3d17d99d" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.391662 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsg9m"] Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.395575 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qsg9m"] Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.410903 4909 scope.go:117] "RemoveContainer" containerID="8ebc8bf0bf568c9a37fafd2d3e822a818d41152a07fcf43a9195397897e15a9e" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.486227 4909 scope.go:117] "RemoveContainer" containerID="31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00" Nov 26 09:01:06 crc kubenswrapper[4909]: E1126 09:01:06.486926 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00\": container with ID starting with 31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00 not found: ID does not exist" containerID="31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.486969 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00"} err="failed to get container status 
\"31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00\": rpc error: code = NotFound desc = could not find container \"31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00\": container with ID starting with 31c24b4fbfcc298d8327cc7cdb7720f131af38b4624f1b48674a9c7539faeb00 not found: ID does not exist" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.486995 4909 scope.go:117] "RemoveContainer" containerID="58bd7218627da33ed46df5d4b34a0dd57ec9e14374b95b525dc0529b3d17d99d" Nov 26 09:01:06 crc kubenswrapper[4909]: E1126 09:01:06.487557 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58bd7218627da33ed46df5d4b34a0dd57ec9e14374b95b525dc0529b3d17d99d\": container with ID starting with 58bd7218627da33ed46df5d4b34a0dd57ec9e14374b95b525dc0529b3d17d99d not found: ID does not exist" containerID="58bd7218627da33ed46df5d4b34a0dd57ec9e14374b95b525dc0529b3d17d99d" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.487628 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58bd7218627da33ed46df5d4b34a0dd57ec9e14374b95b525dc0529b3d17d99d"} err="failed to get container status \"58bd7218627da33ed46df5d4b34a0dd57ec9e14374b95b525dc0529b3d17d99d\": rpc error: code = NotFound desc = could not find container \"58bd7218627da33ed46df5d4b34a0dd57ec9e14374b95b525dc0529b3d17d99d\": container with ID starting with 58bd7218627da33ed46df5d4b34a0dd57ec9e14374b95b525dc0529b3d17d99d not found: ID does not exist" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.487667 4909 scope.go:117] "RemoveContainer" containerID="8ebc8bf0bf568c9a37fafd2d3e822a818d41152a07fcf43a9195397897e15a9e" Nov 26 09:01:06 crc kubenswrapper[4909]: E1126 09:01:06.488055 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ebc8bf0bf568c9a37fafd2d3e822a818d41152a07fcf43a9195397897e15a9e\": container with ID starting with 8ebc8bf0bf568c9a37fafd2d3e822a818d41152a07fcf43a9195397897e15a9e not found: ID does not exist" containerID="8ebc8bf0bf568c9a37fafd2d3e822a818d41152a07fcf43a9195397897e15a9e" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.488108 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ebc8bf0bf568c9a37fafd2d3e822a818d41152a07fcf43a9195397897e15a9e"} err="failed to get container status \"8ebc8bf0bf568c9a37fafd2d3e822a818d41152a07fcf43a9195397897e15a9e\": rpc error: code = NotFound desc = could not find container \"8ebc8bf0bf568c9a37fafd2d3e822a818d41152a07fcf43a9195397897e15a9e\": container with ID starting with 8ebc8bf0bf568c9a37fafd2d3e822a818d41152a07fcf43a9195397897e15a9e not found: ID does not exist" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.529919 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" path="/var/lib/kubelet/pods/bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150/volumes" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.818900 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.990334 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-config-data\") pod \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.990669 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-fernet-keys\") pod \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.990711 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77jqp\" (UniqueName: \"kubernetes.io/projected/c025b17f-fdf8-4946-b88b-b33958ad8d0f-kube-api-access-77jqp\") pod \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.990849 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-combined-ca-bundle\") pod \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\" (UID: \"c025b17f-fdf8-4946-b88b-b33958ad8d0f\") " Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.995640 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c025b17f-fdf8-4946-b88b-b33958ad8d0f" (UID: "c025b17f-fdf8-4946-b88b-b33958ad8d0f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:01:06 crc kubenswrapper[4909]: I1126 09:01:06.995660 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c025b17f-fdf8-4946-b88b-b33958ad8d0f-kube-api-access-77jqp" (OuterVolumeSpecName: "kube-api-access-77jqp") pod "c025b17f-fdf8-4946-b88b-b33958ad8d0f" (UID: "c025b17f-fdf8-4946-b88b-b33958ad8d0f"). InnerVolumeSpecName "kube-api-access-77jqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:01:07 crc kubenswrapper[4909]: I1126 09:01:07.023952 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c025b17f-fdf8-4946-b88b-b33958ad8d0f" (UID: "c025b17f-fdf8-4946-b88b-b33958ad8d0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:01:07 crc kubenswrapper[4909]: I1126 09:01:07.049731 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-config-data" (OuterVolumeSpecName: "config-data") pod "c025b17f-fdf8-4946-b88b-b33958ad8d0f" (UID: "c025b17f-fdf8-4946-b88b-b33958ad8d0f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:01:07 crc kubenswrapper[4909]: I1126 09:01:07.093943 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:07 crc kubenswrapper[4909]: I1126 09:01:07.093973 4909 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:07 crc kubenswrapper[4909]: I1126 09:01:07.093983 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77jqp\" (UniqueName: \"kubernetes.io/projected/c025b17f-fdf8-4946-b88b-b33958ad8d0f-kube-api-access-77jqp\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:07 crc kubenswrapper[4909]: I1126 09:01:07.093993 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c025b17f-fdf8-4946-b88b-b33958ad8d0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:07 crc kubenswrapper[4909]: I1126 09:01:07.323652 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29402461-nkbfr" event={"ID":"c025b17f-fdf8-4946-b88b-b33958ad8d0f","Type":"ContainerDied","Data":"b7b5a1459f90393a903f6f846c91f60172227e89c6a1793fcb4336dabefdb65b"} Nov 26 09:01:07 crc kubenswrapper[4909]: I1126 09:01:07.323699 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7b5a1459f90393a903f6f846c91f60172227e89c6a1793fcb4336dabefdb65b" Nov 26 09:01:07 crc kubenswrapper[4909]: I1126 09:01:07.323703 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29402461-nkbfr" Nov 26 09:01:13 crc kubenswrapper[4909]: I1126 09:01:13.500390 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 09:01:13 crc kubenswrapper[4909]: E1126 09:01:13.501197 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:01:17 crc kubenswrapper[4909]: I1126 09:01:17.425663 4909 generic.go:334] "Generic (PLEG): container finished" podID="6297fa9c-fc6c-4b1d-ab62-62e3f52004c3" containerID="60b7ee97636d829f5cbc6c3f6e9c1f82fc121c29f4a1a3469250e00cf730ba98" exitCode=0 Nov 26 09:01:17 crc kubenswrapper[4909]: I1126 09:01:17.426139 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-thnkk" event={"ID":"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3","Type":"ContainerDied","Data":"60b7ee97636d829f5cbc6c3f6e9c1f82fc121c29f4a1a3469250e00cf730ba98"} Nov 26 09:01:18 crc kubenswrapper[4909]: I1126 09:01:18.892277 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-thnkk" Nov 26 09:01:18 crc kubenswrapper[4909]: I1126 09:01:18.956357 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ssh-key\") pod \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " Nov 26 09:01:18 crc kubenswrapper[4909]: I1126 09:01:18.956500 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ceph\") pod \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " Nov 26 09:01:18 crc kubenswrapper[4909]: I1126 09:01:18.956531 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znjl7\" (UniqueName: \"kubernetes.io/projected/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-kube-api-access-znjl7\") pod \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " Nov 26 09:01:18 crc kubenswrapper[4909]: I1126 09:01:18.956943 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-inventory\") pod \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\" (UID: \"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3\") " Nov 26 09:01:18 crc kubenswrapper[4909]: I1126 09:01:18.962667 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ceph" (OuterVolumeSpecName: "ceph") pod "6297fa9c-fc6c-4b1d-ab62-62e3f52004c3" (UID: "6297fa9c-fc6c-4b1d-ab62-62e3f52004c3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:01:18 crc kubenswrapper[4909]: I1126 09:01:18.962667 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-kube-api-access-znjl7" (OuterVolumeSpecName: "kube-api-access-znjl7") pod "6297fa9c-fc6c-4b1d-ab62-62e3f52004c3" (UID: "6297fa9c-fc6c-4b1d-ab62-62e3f52004c3"). InnerVolumeSpecName "kube-api-access-znjl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:01:18 crc kubenswrapper[4909]: I1126 09:01:18.988676 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-inventory" (OuterVolumeSpecName: "inventory") pod "6297fa9c-fc6c-4b1d-ab62-62e3f52004c3" (UID: "6297fa9c-fc6c-4b1d-ab62-62e3f52004c3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.002894 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6297fa9c-fc6c-4b1d-ab62-62e3f52004c3" (UID: "6297fa9c-fc6c-4b1d-ab62-62e3f52004c3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.059496 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.059533 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.059541 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.059552 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znjl7\" (UniqueName: \"kubernetes.io/projected/6297fa9c-fc6c-4b1d-ab62-62e3f52004c3-kube-api-access-znjl7\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.445720 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-thnkk" event={"ID":"6297fa9c-fc6c-4b1d-ab62-62e3f52004c3","Type":"ContainerDied","Data":"c88c649932c00e4fcbd2c7707427e3aaadf3adedf8da27cfe32d0b0b2cb9c370"} Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.445757 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88c649932c00e4fcbd2c7707427e3aaadf3adedf8da27cfe32d0b0b2cb9c370" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.445802 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-thnkk" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.537566 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-mj6p4"] Nov 26 09:01:19 crc kubenswrapper[4909]: E1126 09:01:19.537983 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c025b17f-fdf8-4946-b88b-b33958ad8d0f" containerName="keystone-cron" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.537998 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c025b17f-fdf8-4946-b88b-b33958ad8d0f" containerName="keystone-cron" Nov 26 09:01:19 crc kubenswrapper[4909]: E1126 09:01:19.538039 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" containerName="extract-content" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.538045 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" containerName="extract-content" Nov 26 09:01:19 crc kubenswrapper[4909]: E1126 09:01:19.538070 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" containerName="extract-utilities" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.538076 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" containerName="extract-utilities" Nov 26 09:01:19 crc kubenswrapper[4909]: E1126 09:01:19.538087 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" containerName="registry-server" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.538092 4909 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" containerName="registry-server" Nov 26 09:01:19 crc kubenswrapper[4909]: E1126 09:01:19.538107 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6297fa9c-fc6c-4b1d-ab62-62e3f52004c3" containerName="configure-network-openstack-openstack-cell1" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.538114 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="6297fa9c-fc6c-4b1d-ab62-62e3f52004c3" containerName="configure-network-openstack-openstack-cell1" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.538295 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="6297fa9c-fc6c-4b1d-ab62-62e3f52004c3" containerName="configure-network-openstack-openstack-cell1" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.538319 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb5bcd01-7ca2-4724-b84a-d9fa4b9cb150" containerName="registry-server" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.538342 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c025b17f-fdf8-4946-b88b-b33958ad8d0f" containerName="keystone-cron" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.539041 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.543111 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.543384 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.543512 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.549321 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-mj6p4"] Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.551969 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.682060 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ceph\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.682250 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djgjn\" (UniqueName: \"kubernetes.io/projected/01317a74-5f88-42f3-bafe-bdaa599dc2f2-kube-api-access-djgjn\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.682361 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ssh-key\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " 
pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.683493 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-inventory\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.784634 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djgjn\" (UniqueName: \"kubernetes.io/projected/01317a74-5f88-42f3-bafe-bdaa599dc2f2-kube-api-access-djgjn\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.784741 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ssh-key\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.784818 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-inventory\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.784872 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ceph\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.791611 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-inventory\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.805156 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ssh-key\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.811989 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djgjn\" (UniqueName: \"kubernetes.io/projected/01317a74-5f88-42f3-bafe-bdaa599dc2f2-kube-api-access-djgjn\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.823333 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ceph\") pod \"validate-network-openstack-openstack-cell1-mj6p4\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:19 crc kubenswrapper[4909]: I1126 09:01:19.868085 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:20 crc kubenswrapper[4909]: I1126 09:01:20.518944 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-mj6p4"] Nov 26 09:01:21 crc kubenswrapper[4909]: I1126 09:01:21.470299 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" event={"ID":"01317a74-5f88-42f3-bafe-bdaa599dc2f2","Type":"ContainerStarted","Data":"a8c64e72d478d215fb55cafb47f0ab2995dc192ec36279b3ec4f6ab8332aea5c"} Nov 26 09:01:21 crc kubenswrapper[4909]: I1126 09:01:21.470594 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" event={"ID":"01317a74-5f88-42f3-bafe-bdaa599dc2f2","Type":"ContainerStarted","Data":"f2b93d536c9f261072c45574f03b0ba4a1601b8ed197955835581f5c7d73ac47"} Nov 26 09:01:21 crc kubenswrapper[4909]: I1126 09:01:21.493956 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" podStartSLOduration=1.8507043570000001 podStartE2EDuration="2.49390721s" podCreationTimestamp="2025-11-26 09:01:19 +0000 UTC" firstStartedPulling="2025-11-26 09:01:20.4845562 +0000 UTC m=+7252.630767366" lastFinishedPulling="2025-11-26 09:01:21.127759043 +0000 UTC m=+7253.273970219" observedRunningTime="2025-11-26 09:01:21.489582812 +0000 UTC m=+7253.635793968" watchObservedRunningTime="2025-11-26 09:01:21.49390721 +0000 UTC m=+7253.640118396" Nov 26 09:01:26 crc kubenswrapper[4909]: I1126 09:01:26.517894 4909 generic.go:334] "Generic (PLEG): container finished" podID="01317a74-5f88-42f3-bafe-bdaa599dc2f2" containerID="a8c64e72d478d215fb55cafb47f0ab2995dc192ec36279b3ec4f6ab8332aea5c" exitCode=0 Nov 26 09:01:26 crc kubenswrapper[4909]: I1126 09:01:26.517956 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" event={"ID":"01317a74-5f88-42f3-bafe-bdaa599dc2f2","Type":"ContainerDied","Data":"a8c64e72d478d215fb55cafb47f0ab2995dc192ec36279b3ec4f6ab8332aea5c"} Nov 26 09:01:27 crc kubenswrapper[4909]: I1126 09:01:27.499650 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 09:01:27 crc kubenswrapper[4909]: E1126 09:01:27.500332 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:01:27 crc kubenswrapper[4909]: I1126 09:01:27.970360 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:27 crc kubenswrapper[4909]: I1126 09:01:27.983123 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ceph\") pod \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " Nov 26 09:01:27 crc kubenswrapper[4909]: I1126 09:01:27.983246 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-inventory\") pod \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " Nov 26 09:01:27 crc kubenswrapper[4909]: I1126 09:01:27.983303 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ssh-key\") pod \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " Nov 26 09:01:27 crc kubenswrapper[4909]: I1126 09:01:27.986575 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djgjn\" (UniqueName: \"kubernetes.io/projected/01317a74-5f88-42f3-bafe-bdaa599dc2f2-kube-api-access-djgjn\") pod \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\" (UID: \"01317a74-5f88-42f3-bafe-bdaa599dc2f2\") " Nov 26 09:01:27 crc kubenswrapper[4909]: I1126 09:01:27.991393 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ceph" (OuterVolumeSpecName: "ceph") pod "01317a74-5f88-42f3-bafe-bdaa599dc2f2" (UID: "01317a74-5f88-42f3-bafe-bdaa599dc2f2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:01:27 crc kubenswrapper[4909]: I1126 09:01:27.998690 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01317a74-5f88-42f3-bafe-bdaa599dc2f2-kube-api-access-djgjn" (OuterVolumeSpecName: "kube-api-access-djgjn") pod "01317a74-5f88-42f3-bafe-bdaa599dc2f2" (UID: "01317a74-5f88-42f3-bafe-bdaa599dc2f2"). InnerVolumeSpecName "kube-api-access-djgjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.022773 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "01317a74-5f88-42f3-bafe-bdaa599dc2f2" (UID: "01317a74-5f88-42f3-bafe-bdaa599dc2f2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.050036 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-inventory" (OuterVolumeSpecName: "inventory") pod "01317a74-5f88-42f3-bafe-bdaa599dc2f2" (UID: "01317a74-5f88-42f3-bafe-bdaa599dc2f2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.089583 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djgjn\" (UniqueName: \"kubernetes.io/projected/01317a74-5f88-42f3-bafe-bdaa599dc2f2-kube-api-access-djgjn\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.089648 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.089660 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.089671 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01317a74-5f88-42f3-bafe-bdaa599dc2f2-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.537332 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" event={"ID":"01317a74-5f88-42f3-bafe-bdaa599dc2f2","Type":"ContainerDied","Data":"f2b93d536c9f261072c45574f03b0ba4a1601b8ed197955835581f5c7d73ac47"} Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.537741 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b93d536c9f261072c45574f03b0ba4a1601b8ed197955835581f5c7d73ac47" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.537803 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-mj6p4" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.632846 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-znphg"] Nov 26 09:01:28 crc kubenswrapper[4909]: E1126 09:01:28.633345 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01317a74-5f88-42f3-bafe-bdaa599dc2f2" containerName="validate-network-openstack-openstack-cell1" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.633368 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="01317a74-5f88-42f3-bafe-bdaa599dc2f2" containerName="validate-network-openstack-openstack-cell1" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.633622 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="01317a74-5f88-42f3-bafe-bdaa599dc2f2" containerName="validate-network-openstack-openstack-cell1" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.634452 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-znphg" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.637727 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.637931 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.638083 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.638257 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.660548 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-znphg"] Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.714651 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-inventory\") pod \"install-os-openstack-openstack-cell1-znphg\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") " pod="openstack/install-os-openstack-openstack-cell1-znphg" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.714731 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-ceph\") pod \"install-os-openstack-openstack-cell1-znphg\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") " pod="openstack/install-os-openstack-openstack-cell1-znphg" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.714764 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-ssh-key\") pod \"install-os-openstack-openstack-cell1-znphg\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") " pod="openstack/install-os-openstack-openstack-cell1-znphg" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.714929 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27nkz\" (UniqueName: \"kubernetes.io/projected/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-kube-api-access-27nkz\") pod \"install-os-openstack-openstack-cell1-znphg\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") " pod="openstack/install-os-openstack-openstack-cell1-znphg" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.816991 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27nkz\" (UniqueName: \"kubernetes.io/projected/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-kube-api-access-27nkz\") pod \"install-os-openstack-openstack-cell1-znphg\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") " pod="openstack/install-os-openstack-openstack-cell1-znphg" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.817138 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-inventory\") pod \"install-os-openstack-openstack-cell1-znphg\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") " pod="openstack/install-os-openstack-openstack-cell1-znphg" Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 
Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.817258 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-ssh-key\") pod \"install-os-openstack-openstack-cell1-znphg\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") " pod="openstack/install-os-openstack-openstack-cell1-znphg"
Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.822958 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-inventory\") pod \"install-os-openstack-openstack-cell1-znphg\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") " pod="openstack/install-os-openstack-openstack-cell1-znphg"
Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.824385 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-ceph\") pod \"install-os-openstack-openstack-cell1-znphg\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") " pod="openstack/install-os-openstack-openstack-cell1-znphg"
Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.827816 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-ssh-key\") pod \"install-os-openstack-openstack-cell1-znphg\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") " pod="openstack/install-os-openstack-openstack-cell1-znphg"
Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.837117 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27nkz\" (UniqueName: \"kubernetes.io/projected/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-kube-api-access-27nkz\") pod \"install-os-openstack-openstack-cell1-znphg\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") " pod="openstack/install-os-openstack-openstack-cell1-znphg"
Nov 26 09:01:28 crc kubenswrapper[4909]: I1126 09:01:28.963243 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-znphg"
Nov 26 09:01:29 crc kubenswrapper[4909]: I1126 09:01:29.531829 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-znphg"]
Nov 26 09:01:29 crc kubenswrapper[4909]: I1126 09:01:29.548438 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-znphg" event={"ID":"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d","Type":"ContainerStarted","Data":"4b95a9ed15fb877dc1df54bf26f4fa0633731c8f4c19b07d781001b97a56b9a5"}
Nov 26 09:01:30 crc kubenswrapper[4909]: I1126 09:01:30.558882 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-znphg" event={"ID":"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d","Type":"ContainerStarted","Data":"8af8ad2571dddac92d0b4e190ee98e2260296bd1c826b707847767b54f540e4f"}
Nov 26 09:01:30 crc kubenswrapper[4909]: I1126 09:01:30.576701 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-znphg" podStartSLOduration=1.986206429 podStartE2EDuration="2.576681385s" podCreationTimestamp="2025-11-26 09:01:28 +0000 UTC" firstStartedPulling="2025-11-26 09:01:29.522815091 +0000 UTC m=+7261.669026257" lastFinishedPulling="2025-11-26 09:01:30.113290027 +0000 UTC m=+7262.259501213" observedRunningTime="2025-11-26 09:01:30.572763519 +0000 UTC m=+7262.718974695" watchObservedRunningTime="2025-11-26 09:01:30.576681385 +0000 UTC m=+7262.722892551"
Nov 26 09:01:38 crc kubenswrapper[4909]: I1126 09:01:38.506908 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"
Nov 26 09:01:38 crc kubenswrapper[4909]: E1126 09:01:38.507980 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:01:49 crc kubenswrapper[4909]: I1126 09:01:49.499616 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"
Nov 26 09:01:49 crc kubenswrapper[4909]: E1126 09:01:49.500432 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:02:00 crc kubenswrapper[4909]: I1126 09:02:00.499204 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"
Nov 26 09:02:00 crc kubenswrapper[4909]: E1126 09:02:00.500171 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:02:15 crc kubenswrapper[4909]: I1126 09:02:15.499186 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7"
Nov 26 09:02:15 crc kubenswrapper[4909]: E1126 09:02:15.500080 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:02:17 crc kubenswrapper[4909]: I1126 09:02:17.087176 4909 generic.go:334] "Generic (PLEG): container finished" podID="aaf6fcf3-bb6b-4c6a-9a85-91885140e70d" containerID="8af8ad2571dddac92d0b4e190ee98e2260296bd1c826b707847767b54f540e4f" exitCode=0
Nov 26 09:02:17 crc kubenswrapper[4909]: I1126 09:02:17.087273 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-znphg" event={"ID":"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d","Type":"ContainerDied","Data":"8af8ad2571dddac92d0b4e190ee98e2260296bd1c826b707847767b54f540e4f"}
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.567767 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-znphg"
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.764425 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27nkz\" (UniqueName: \"kubernetes.io/projected/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-kube-api-access-27nkz\") pod \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") "
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.764484 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-ceph\") pod \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") "
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.764608 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-inventory\") pod \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") "
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.764638 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-ssh-key\") pod \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\" (UID: \"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d\") "
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.775967 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-ceph" (OuterVolumeSpecName: "ceph") pod "aaf6fcf3-bb6b-4c6a-9a85-91885140e70d" (UID: "aaf6fcf3-bb6b-4c6a-9a85-91885140e70d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.776008 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-kube-api-access-27nkz" (OuterVolumeSpecName: "kube-api-access-27nkz") pod "aaf6fcf3-bb6b-4c6a-9a85-91885140e70d" (UID: "aaf6fcf3-bb6b-4c6a-9a85-91885140e70d"). InnerVolumeSpecName "kube-api-access-27nkz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.804133 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aaf6fcf3-bb6b-4c6a-9a85-91885140e70d" (UID: "aaf6fcf3-bb6b-4c6a-9a85-91885140e70d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.804768 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-inventory" (OuterVolumeSpecName: "inventory") pod "aaf6fcf3-bb6b-4c6a-9a85-91885140e70d" (UID: "aaf6fcf3-bb6b-4c6a-9a85-91885140e70d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.867812 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-inventory\") on node \"crc\" DevicePath \"\""
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.867861 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.867879 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27nkz\" (UniqueName: \"kubernetes.io/projected/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-kube-api-access-27nkz\") on node \"crc\" DevicePath \"\""
Nov 26 09:02:18 crc kubenswrapper[4909]: I1126 09:02:18.867900 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aaf6fcf3-bb6b-4c6a-9a85-91885140e70d-ceph\") on node \"crc\" DevicePath \"\""
Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.115231 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-znphg" event={"ID":"aaf6fcf3-bb6b-4c6a-9a85-91885140e70d","Type":"ContainerDied","Data":"4b95a9ed15fb877dc1df54bf26f4fa0633731c8f4c19b07d781001b97a56b9a5"}
Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.115308 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-znphg"
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-znphg" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.115315 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b95a9ed15fb877dc1df54bf26f4fa0633731c8f4c19b07d781001b97a56b9a5" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.243571 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-kvf2x"] Nov 26 09:02:19 crc kubenswrapper[4909]: E1126 09:02:19.244120 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf6fcf3-bb6b-4c6a-9a85-91885140e70d" containerName="install-os-openstack-openstack-cell1" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.244140 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf6fcf3-bb6b-4c6a-9a85-91885140e70d" containerName="install-os-openstack-openstack-cell1" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.244475 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf6fcf3-bb6b-4c6a-9a85-91885140e70d" containerName="install-os-openstack-openstack-cell1" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.245575 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.249559 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.249582 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.249936 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.251479 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.261014 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-kvf2x"] Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.275377 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwpjw\" (UniqueName: \"kubernetes.io/projected/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-kube-api-access-mwpjw\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.275452 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ceph\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.275518 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ssh-key\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: 
I1126 09:02:19.275575 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-inventory\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.377852 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ceph\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.377987 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ssh-key\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.378070 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-inventory\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.378158 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwpjw\" (UniqueName: \"kubernetes.io/projected/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-kube-api-access-mwpjw\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.401297 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-inventory\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.413016 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwpjw\" (UniqueName: \"kubernetes.io/projected/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-kube-api-access-mwpjw\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.424246 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ceph\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.430235 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ssh-key\") pod \"configure-os-openstack-openstack-cell1-kvf2x\" (UID: 
\"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") " pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:19 crc kubenswrapper[4909]: I1126 09:02:19.612759 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" Nov 26 09:02:20 crc kubenswrapper[4909]: I1126 09:02:20.210358 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-kvf2x"] Nov 26 09:02:20 crc kubenswrapper[4909]: W1126 09:02:20.212027 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4eb1dd46_2b50_4cee_b40e_0499b60dd32c.slice/crio-aafc63752bee0ccc48360e1f95a055ef31f425c3a4d03d6daa7e8165634cb680 WatchSource:0}: Error finding container aafc63752bee0ccc48360e1f95a055ef31f425c3a4d03d6daa7e8165634cb680: Status 404 returned error can't find the container with id aafc63752bee0ccc48360e1f95a055ef31f425c3a4d03d6daa7e8165634cb680 Nov 26 09:02:21 crc kubenswrapper[4909]: I1126 09:02:21.138937 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" event={"ID":"4eb1dd46-2b50-4cee-b40e-0499b60dd32c","Type":"ContainerStarted","Data":"aafc63752bee0ccc48360e1f95a055ef31f425c3a4d03d6daa7e8165634cb680"} Nov 26 09:02:22 crc kubenswrapper[4909]: I1126 09:02:22.150432 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" event={"ID":"4eb1dd46-2b50-4cee-b40e-0499b60dd32c","Type":"ContainerStarted","Data":"b7470e8f612ec5deac98c667835116221a980ed625aff77b830043150dbd2eda"} Nov 26 09:02:22 crc kubenswrapper[4909]: I1126 09:02:22.169912 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" podStartSLOduration=2.139674528 podStartE2EDuration="3.169875766s" podCreationTimestamp="2025-11-26 09:02:19 +0000 UTC" firstStartedPulling="2025-11-26 09:02:20.215609974 +0000 UTC m=+7312.361821140" lastFinishedPulling="2025-11-26 09:02:21.245811212 +0000 UTC m=+7313.392022378" observedRunningTime="2025-11-26 09:02:22.168275893 +0000 UTC m=+7314.314487089" watchObservedRunningTime="2025-11-26 09:02:22.169875766 +0000 UTC m=+7314.316086932" Nov 26 09:02:28 crc kubenswrapper[4909]: I1126 09:02:28.508742 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 09:02:28 crc kubenswrapper[4909]: E1126 09:02:28.511108 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:02:42 crc kubenswrapper[4909]: I1126 09:02:42.499929 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 09:02:43 crc kubenswrapper[4909]: I1126 09:02:43.438910 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"9485f7f07079fe5373a349cfe3c947df8d42987ac57bf67685a11bc4c748e707"} Nov 26 09:03:05 crc 
kubenswrapper[4909]: I1126 09:03:05.677212 4909 generic.go:334] "Generic (PLEG): container finished" podID="4eb1dd46-2b50-4cee-b40e-0499b60dd32c" containerID="b7470e8f612ec5deac98c667835116221a980ed625aff77b830043150dbd2eda" exitCode=0 Nov 26 09:03:05 crc kubenswrapper[4909]: I1126 09:03:05.677275 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" event={"ID":"4eb1dd46-2b50-4cee-b40e-0499b60dd32c","Type":"ContainerDied","Data":"b7470e8f612ec5deac98c667835116221a980ed625aff77b830043150dbd2eda"} Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.011328 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jgd6m"] Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.014887 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.028746 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jgd6m"] Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.209548 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-utilities\") pod \"redhat-operators-jgd6m\" (UID: \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\") " pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.209629 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb9xv\" (UniqueName: \"kubernetes.io/projected/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-kube-api-access-xb9xv\") pod \"redhat-operators-jgd6m\" (UID: \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\") " pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.209693 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-catalog-content\") pod \"redhat-operators-jgd6m\" (UID: \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\") " pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.311968 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-catalog-content\") pod \"redhat-operators-jgd6m\" (UID: \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\") " pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.312205 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-utilities\") pod \"redhat-operators-jgd6m\" (UID: \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\") " pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.312242 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb9xv\" (UniqueName: \"kubernetes.io/projected/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-kube-api-access-xb9xv\") pod \"redhat-operators-jgd6m\" (UID: \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\") " pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 
Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.312718 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-utilities\") pod \"redhat-operators-jgd6m\" (UID: \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\") " pod="openshift-marketplace/redhat-operators-jgd6m"
Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.334486 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb9xv\" (UniqueName: \"kubernetes.io/projected/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-kube-api-access-xb9xv\") pod \"redhat-operators-jgd6m\" (UID: \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\") " pod="openshift-marketplace/redhat-operators-jgd6m"
Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.337716 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jgd6m"
Nov 26 09:03:06 crc kubenswrapper[4909]: I1126 09:03:06.867592 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jgd6m"]
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.238631 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-kvf2x"
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.241297 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-inventory\") pod \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") "
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.241414 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ceph\") pod \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") "
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.241493 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ssh-key\") pod \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") "
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.241659 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwpjw\" (UniqueName: \"kubernetes.io/projected/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-kube-api-access-mwpjw\") pod \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\" (UID: \"4eb1dd46-2b50-4cee-b40e-0499b60dd32c\") "
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.254553 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-kube-api-access-mwpjw" (OuterVolumeSpecName: "kube-api-access-mwpjw") pod "4eb1dd46-2b50-4cee-b40e-0499b60dd32c" (UID: "4eb1dd46-2b50-4cee-b40e-0499b60dd32c"). InnerVolumeSpecName "kube-api-access-mwpjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.256441 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ceph" (OuterVolumeSpecName: "ceph") pod "4eb1dd46-2b50-4cee-b40e-0499b60dd32c" (UID: "4eb1dd46-2b50-4cee-b40e-0499b60dd32c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.282566 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-inventory" (OuterVolumeSpecName: "inventory") pod "4eb1dd46-2b50-4cee-b40e-0499b60dd32c" (UID: "4eb1dd46-2b50-4cee-b40e-0499b60dd32c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.297939 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4eb1dd46-2b50-4cee-b40e-0499b60dd32c" (UID: "4eb1dd46-2b50-4cee-b40e-0499b60dd32c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.348085 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-inventory\") on node \"crc\" DevicePath \"\""
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.348115 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ceph\") on node \"crc\" DevicePath \"\""
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.348124 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.348132 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwpjw\" (UniqueName: \"kubernetes.io/projected/4eb1dd46-2b50-4cee-b40e-0499b60dd32c-kube-api-access-mwpjw\") on node \"crc\" DevicePath \"\""
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.700345 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-kvf2x" event={"ID":"4eb1dd46-2b50-4cee-b40e-0499b60dd32c","Type":"ContainerDied","Data":"aafc63752bee0ccc48360e1f95a055ef31f425c3a4d03d6daa7e8165634cb680"}
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.700402 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aafc63752bee0ccc48360e1f95a055ef31f425c3a4d03d6daa7e8165634cb680"
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.700355 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-kvf2x"
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.702523 4909 generic.go:334] "Generic (PLEG): container finished" podID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerID="2a4902c8d170676b076d925ffd2fca538a50badbf9b6d8121867609331a50ba1" exitCode=0
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.702574 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgd6m" event={"ID":"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb","Type":"ContainerDied","Data":"2a4902c8d170676b076d925ffd2fca538a50badbf9b6d8121867609331a50ba1"}
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.702629 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgd6m" event={"ID":"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb","Type":"ContainerStarted","Data":"2b43ea75467c8043e52f36dbfa6b82d5387299703e329c5afb1dccb4603e9837"}
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.801728 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-98vzd"]
Nov 26 09:03:07 crc kubenswrapper[4909]: E1126 09:03:07.802291 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb1dd46-2b50-4cee-b40e-0499b60dd32c" containerName="configure-os-openstack-openstack-cell1"
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.802313 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb1dd46-2b50-4cee-b40e-0499b60dd32c" containerName="configure-os-openstack-openstack-cell1"
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.802614 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb1dd46-2b50-4cee-b40e-0499b60dd32c" containerName="configure-os-openstack-openstack-cell1"
Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.804769 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-98vzd"
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.808158 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.808387 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.808557 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.808726 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.815963 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-98vzd"] Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.958622 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-inventory-0\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.958698 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ceph\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.959155 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:07 crc kubenswrapper[4909]: I1126 09:03:07.959511 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjls\" (UniqueName: \"kubernetes.io/projected/1ede74bc-82e7-45ee-9592-663a43097439-kube-api-access-5cjls\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:08 crc kubenswrapper[4909]: I1126 09:03:08.061629 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cjls\" (UniqueName: \"kubernetes.io/projected/1ede74bc-82e7-45ee-9592-663a43097439-kube-api-access-5cjls\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:08 crc kubenswrapper[4909]: I1126 09:03:08.061687 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-inventory-0\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:08 crc kubenswrapper[4909]: I1126 09:03:08.061723 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ceph\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:08 crc kubenswrapper[4909]: I1126 09:03:08.061821 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:08 crc kubenswrapper[4909]: I1126 09:03:08.069461 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-inventory-0\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:08 crc kubenswrapper[4909]: I1126 09:03:08.072990 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ceph\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:08 crc kubenswrapper[4909]: I1126 09:03:08.081789 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:08 crc kubenswrapper[4909]: I1126 09:03:08.087019 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cjls\" (UniqueName: \"kubernetes.io/projected/1ede74bc-82e7-45ee-9592-663a43097439-kube-api-access-5cjls\") pod \"ssh-known-hosts-openstack-98vzd\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:08 crc kubenswrapper[4909]: I1126 09:03:08.141846 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:08 crc kubenswrapper[4909]: I1126 09:03:08.712567 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-98vzd"] Nov 26 09:03:08 crc kubenswrapper[4909]: I1126 09:03:08.727157 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-98vzd" event={"ID":"1ede74bc-82e7-45ee-9592-663a43097439","Type":"ContainerStarted","Data":"db9f37d3c2bb2af744a456d5c4e96162776c072faa4ec86cdfab7652f22ad3b5"} Nov 26 09:03:09 crc kubenswrapper[4909]: I1126 09:03:09.757132 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgd6m" event={"ID":"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb","Type":"ContainerStarted","Data":"382f6afc71d7293b72be50342ea1e7a8a0b123ba9564f934323fbf3954333cbd"} Nov 26 09:03:10 crc kubenswrapper[4909]: I1126 09:03:10.767304 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-98vzd" event={"ID":"1ede74bc-82e7-45ee-9592-663a43097439","Type":"ContainerStarted","Data":"f14775de1f48a0b39a1acfde8d4d5f466bee314a428ef9ef489965cf3837d9c7"} Nov 26 09:03:10 crc kubenswrapper[4909]: I1126 09:03:10.796208 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-98vzd" podStartSLOduration=2.9363557030000003 podStartE2EDuration="3.796182804s" podCreationTimestamp="2025-11-26 09:03:07 +0000 UTC" firstStartedPulling="2025-11-26 09:03:08.712895452 +0000 UTC m=+7360.859106628" lastFinishedPulling="2025-11-26 09:03:09.572722563 +0000 UTC m=+7361.718933729" observedRunningTime="2025-11-26 09:03:10.783522289 +0000 UTC m=+7362.929733465" watchObservedRunningTime="2025-11-26 09:03:10.796182804 +0000 UTC m=+7362.942394010" Nov 26 09:03:13 crc kubenswrapper[4909]: I1126 09:03:13.800536 4909 generic.go:334] "Generic (PLEG): container finished" podID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerID="382f6afc71d7293b72be50342ea1e7a8a0b123ba9564f934323fbf3954333cbd" exitCode=0 Nov 26 09:03:13 crc kubenswrapper[4909]: I1126 09:03:13.801045 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgd6m" event={"ID":"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb","Type":"ContainerDied","Data":"382f6afc71d7293b72be50342ea1e7a8a0b123ba9564f934323fbf3954333cbd"} Nov 26 09:03:14 crc kubenswrapper[4909]: I1126 09:03:14.814393 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgd6m" event={"ID":"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb","Type":"ContainerStarted","Data":"869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459"} Nov 26 09:03:14 crc kubenswrapper[4909]: I1126 09:03:14.838449 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jgd6m" podStartSLOduration=3.312427429 podStartE2EDuration="9.838427578s" podCreationTimestamp="2025-11-26 09:03:05 +0000 UTC" firstStartedPulling="2025-11-26 09:03:07.705382421 +0000 UTC m=+7359.851593587" lastFinishedPulling="2025-11-26 09:03:14.23138252 +0000 UTC m=+7366.377593736" observedRunningTime="2025-11-26 09:03:14.830552973 +0000 UTC m=+7366.976764139" watchObservedRunningTime="2025-11-26 09:03:14.838427578 +0000 UTC m=+7366.984638754" Nov 26 09:03:16 crc kubenswrapper[4909]: I1126 09:03:16.338301 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jgd6m" Nov 
26 09:03:16 crc kubenswrapper[4909]: I1126 09:03:16.338611 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:17 crc kubenswrapper[4909]: I1126 09:03:17.392343 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jgd6m" podUID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerName="registry-server" probeResult="failure" output=< Nov 26 09:03:17 crc kubenswrapper[4909]: timeout: failed to connect service ":50051" within 1s Nov 26 09:03:17 crc kubenswrapper[4909]: > Nov 26 09:03:18 crc kubenswrapper[4909]: I1126 09:03:18.887694 4909 generic.go:334] "Generic (PLEG): container finished" podID="1ede74bc-82e7-45ee-9592-663a43097439" containerID="f14775de1f48a0b39a1acfde8d4d5f466bee314a428ef9ef489965cf3837d9c7" exitCode=0 Nov 26 09:03:18 crc kubenswrapper[4909]: I1126 09:03:18.887995 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-98vzd" event={"ID":"1ede74bc-82e7-45ee-9592-663a43097439","Type":"ContainerDied","Data":"f14775de1f48a0b39a1acfde8d4d5f466bee314a428ef9ef489965cf3837d9c7"} Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.273247 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.417442 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ceph\") pod \"1ede74bc-82e7-45ee-9592-663a43097439\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.417567 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-inventory-0\") pod \"1ede74bc-82e7-45ee-9592-663a43097439\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.417811 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ssh-key-openstack-cell1\") pod \"1ede74bc-82e7-45ee-9592-663a43097439\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.417873 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cjls\" (UniqueName: \"kubernetes.io/projected/1ede74bc-82e7-45ee-9592-663a43097439-kube-api-access-5cjls\") pod \"1ede74bc-82e7-45ee-9592-663a43097439\" (UID: \"1ede74bc-82e7-45ee-9592-663a43097439\") " Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.423138 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ede74bc-82e7-45ee-9592-663a43097439-kube-api-access-5cjls" (OuterVolumeSpecName: "kube-api-access-5cjls") pod "1ede74bc-82e7-45ee-9592-663a43097439" (UID: "1ede74bc-82e7-45ee-9592-663a43097439"). InnerVolumeSpecName "kube-api-access-5cjls". PluginName "kubernetes.io/projected", VolumeGidValue ""
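
The probe block above is worth pausing on: the registry-server container runs a startup probe against gRPC port 50051 with a 1-second limit, and at 09:03:17 it fails ("timeout: failed to connect service \":50051\" within 1s") because the freshly started catalog server is still loading its content; it succeeds at 09:03:26 further down, after which the readiness probe flips the pod ready. A minimal sketch of an equivalent check, assuming a plain TCP dial stands in for the real gRPC health RPC and that it runs somewhere the pod's port is reachable:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeOnce attempts a TCP connection the way a 1s-timeout startup probe
    // would; the real probe speaks the gRPC health-checking protocol, which
    // this sketch deliberately simplifies to bare reachability.
    func probeOnce(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return fmt.Errorf("timeout: failed to connect service %q within %s", addr, timeout)
        }
        return conn.Close()
    }

    func main() {
        // :50051 is the registry-server port from the log.
        if err := probeOnce(":50051", time.Second); err != nil {
            fmt.Println("probeResult=failure:", err)
            return
        }
        fmt.Println("probeResult=success")
    }

A failing startup probe means "keep waiting" rather than "restart" until the configured failure threshold is exhausted, which is why the pod simply stays unready for the intervening seconds instead of being killed.
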
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.423185 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ceph" (OuterVolumeSpecName: "ceph") pod "1ede74bc-82e7-45ee-9592-663a43097439" (UID: "1ede74bc-82e7-45ee-9592-663a43097439"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.447955 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1ede74bc-82e7-45ee-9592-663a43097439" (UID: "1ede74bc-82e7-45ee-9592-663a43097439"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.450251 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1ede74bc-82e7-45ee-9592-663a43097439" (UID: "1ede74bc-82e7-45ee-9592-663a43097439"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.520718 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.520746 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cjls\" (UniqueName: \"kubernetes.io/projected/1ede74bc-82e7-45ee-9592-663a43097439-kube-api-access-5cjls\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.520755 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.520765 4909 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1ede74bc-82e7-45ee-9592-663a43097439-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.915806 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-98vzd" event={"ID":"1ede74bc-82e7-45ee-9592-663a43097439","Type":"ContainerDied","Data":"db9f37d3c2bb2af744a456d5c4e96162776c072faa4ec86cdfab7652f22ad3b5"} Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.916549 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-98vzd" Nov 26 09:03:20 crc kubenswrapper[4909]: I1126 09:03:20.916567 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db9f37d3c2bb2af744a456d5c4e96162776c072faa4ec86cdfab7652f22ad3b5" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.002970 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-9lqqb"] Nov 26 09:03:21 crc kubenswrapper[4909]: E1126 09:03:21.003538 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ede74bc-82e7-45ee-9592-663a43097439" containerName="ssh-known-hosts-openstack" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.003556 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ede74bc-82e7-45ee-9592-663a43097439" containerName="ssh-known-hosts-openstack" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.007004 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ede74bc-82e7-45ee-9592-663a43097439" containerName="ssh-known-hosts-openstack" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.007795 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.013789 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.014045 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.017020 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.017513 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.043560 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-9lqqb"] Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.133129 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flkt2\" (UniqueName: \"kubernetes.io/projected/31bd4baf-44de-4ad5-84cc-915eddf3a7da-kube-api-access-flkt2\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.133207 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-inventory\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.133326 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ceph\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.133397 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ssh-key\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.234946 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ceph\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.235053 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ssh-key\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.235123 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flkt2\" (UniqueName: \"kubernetes.io/projected/31bd4baf-44de-4ad5-84cc-915eddf3a7da-kube-api-access-flkt2\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.235162 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-inventory\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.239378 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ssh-key\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.239615 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-inventory\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.253310 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ceph\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.257682 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flkt2\" (UniqueName: \"kubernetes.io/projected/31bd4baf-44de-4ad5-84cc-915eddf3a7da-kube-api-access-flkt2\") pod \"run-os-openstack-openstack-cell1-9lqqb\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.346055 4909 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:21 crc kubenswrapper[4909]: I1126 09:03:21.940873 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-9lqqb"] Nov 26 09:03:21 crc kubenswrapper[4909]: W1126 09:03:21.951617 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31bd4baf_44de_4ad5_84cc_915eddf3a7da.slice/crio-561bede9cfb67fe6e0cc5ae663c968dd0e10ec170eff2f0498a0f40a37d8d899 WatchSource:0}: Error finding container 561bede9cfb67fe6e0cc5ae663c968dd0e10ec170eff2f0498a0f40a37d8d899: Status 404 returned error can't find the container with id 561bede9cfb67fe6e0cc5ae663c968dd0e10ec170eff2f0498a0f40a37d8d899 Nov 26 09:03:22 crc kubenswrapper[4909]: I1126 09:03:22.937896 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-9lqqb" event={"ID":"31bd4baf-44de-4ad5-84cc-915eddf3a7da","Type":"ContainerStarted","Data":"9fc12217d7f3386e329a69ae7d5a21e4ab5ee58a4205b9f4235b60d7fbe6fdf8"} Nov 26 09:03:22 crc kubenswrapper[4909]: I1126 09:03:22.938252 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-9lqqb" event={"ID":"31bd4baf-44de-4ad5-84cc-915eddf3a7da","Type":"ContainerStarted","Data":"561bede9cfb67fe6e0cc5ae663c968dd0e10ec170eff2f0498a0f40a37d8d899"} Nov 26 09:03:22 crc kubenswrapper[4909]: I1126 09:03:22.964023 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-9lqqb" podStartSLOduration=2.503050552 podStartE2EDuration="2.964003465s" podCreationTimestamp="2025-11-26 09:03:20 +0000 UTC" firstStartedPulling="2025-11-26 09:03:21.953681268 +0000 UTC m=+7374.099892434" lastFinishedPulling="2025-11-26 09:03:22.414634181 +0000 UTC m=+7374.560845347" observedRunningTime="2025-11-26 09:03:22.957694382 +0000 UTC m=+7375.103905548" watchObservedRunningTime="2025-11-26 09:03:22.964003465 +0000 UTC m=+7375.110214631" Nov 26 09:03:26 crc kubenswrapper[4909]: I1126 09:03:26.411980 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:26 crc kubenswrapper[4909]: I1126 09:03:26.476039 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:26 crc kubenswrapper[4909]: I1126 09:03:26.648579 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jgd6m"] Nov 26 09:03:27 crc kubenswrapper[4909]: I1126 09:03:27.985051 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jgd6m" podUID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerName="registry-server" containerID="cri-o://869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459" gracePeriod=2 Nov 26 09:03:28 crc kubenswrapper[4909]: I1126 09:03:28.521801 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:28 crc kubenswrapper[4909]: I1126 09:03:28.594007 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb9xv\" (UniqueName: \"kubernetes.io/projected/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-kube-api-access-xb9xv\") pod \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\" (UID: \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\") " Nov 26 09:03:28 crc kubenswrapper[4909]: I1126 09:03:28.594166 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-utilities\") pod \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\" (UID: \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\") " Nov 26 09:03:28 crc kubenswrapper[4909]: I1126 09:03:28.594208 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-catalog-content\") pod \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\" (UID: \"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb\") " Nov 26 09:03:28 crc kubenswrapper[4909]: I1126 09:03:28.595863 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-utilities" (OuterVolumeSpecName: "utilities") pod "fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" (UID: "fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:03:28 crc kubenswrapper[4909]: I1126 09:03:28.602940 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-kube-api-access-xb9xv" (OuterVolumeSpecName: "kube-api-access-xb9xv") pod "fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" (UID: "fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb"). InnerVolumeSpecName "kube-api-access-xb9xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:03:28 crc kubenswrapper[4909]: I1126 09:03:28.697282 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb9xv\" (UniqueName: \"kubernetes.io/projected/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-kube-api-access-xb9xv\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:28 crc kubenswrapper[4909]: I1126 09:03:28.697313 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:28 crc kubenswrapper[4909]: I1126 09:03:28.699767 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" (UID: "fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:03:28 crc kubenswrapper[4909]: I1126 09:03:28.799079 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:28 crc kubenswrapper[4909]: I1126 09:03:28.999828 4909 generic.go:334] "Generic (PLEG): container finished" podID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerID="869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459" exitCode=0 Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:28.999991 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgd6m" event={"ID":"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb","Type":"ContainerDied","Data":"869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459"} Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.000021 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgd6m" event={"ID":"fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb","Type":"ContainerDied","Data":"2b43ea75467c8043e52f36dbfa6b82d5387299703e329c5afb1dccb4603e9837"} Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.000041 4909 scope.go:117] "RemoveContainer" containerID="869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459" Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.000235 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jgd6m" Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.046619 4909 scope.go:117] "RemoveContainer" containerID="382f6afc71d7293b72be50342ea1e7a8a0b123ba9564f934323fbf3954333cbd" Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.051806 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jgd6m"] Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.065510 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jgd6m"] Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.072885 4909 scope.go:117] "RemoveContainer" containerID="2a4902c8d170676b076d925ffd2fca538a50badbf9b6d8121867609331a50ba1" Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.134934 4909 scope.go:117] "RemoveContainer" containerID="869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459" Nov 26 09:03:29 crc kubenswrapper[4909]: E1126 09:03:29.135462 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459\": container with ID starting with 869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459 not found: ID does not exist" containerID="869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459" Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.135511 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459"} err="failed to get container status \"869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459\": rpc error: code = NotFound desc = could not find container \"869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459\": container with ID starting with 869c1e16bbc27e880152dfccde03315c6554bc44a908b5ae4edbf5f121310459 not found: ID does not exist" Nov 26 09:03:29 crc 
Nov 26 09:03:29 crc kubenswrapper[4909]: E1126 09:03:29.135933 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"382f6afc71d7293b72be50342ea1e7a8a0b123ba9564f934323fbf3954333cbd\": container with ID starting with 382f6afc71d7293b72be50342ea1e7a8a0b123ba9564f934323fbf3954333cbd not found: ID does not exist" containerID="382f6afc71d7293b72be50342ea1e7a8a0b123ba9564f934323fbf3954333cbd" Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.135979 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"382f6afc71d7293b72be50342ea1e7a8a0b123ba9564f934323fbf3954333cbd"} err="failed to get container status \"382f6afc71d7293b72be50342ea1e7a8a0b123ba9564f934323fbf3954333cbd\": rpc error: code = NotFound desc = could not find container \"382f6afc71d7293b72be50342ea1e7a8a0b123ba9564f934323fbf3954333cbd\": container with ID starting with 382f6afc71d7293b72be50342ea1e7a8a0b123ba9564f934323fbf3954333cbd not found: ID does not exist" Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.136007 4909 scope.go:117] "RemoveContainer" containerID="2a4902c8d170676b076d925ffd2fca538a50badbf9b6d8121867609331a50ba1" Nov 26 09:03:29 crc kubenswrapper[4909]: E1126 09:03:29.136426 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4902c8d170676b076d925ffd2fca538a50badbf9b6d8121867609331a50ba1\": container with ID starting with 2a4902c8d170676b076d925ffd2fca538a50badbf9b6d8121867609331a50ba1 not found: ID does not exist" containerID="2a4902c8d170676b076d925ffd2fca538a50badbf9b6d8121867609331a50ba1" Nov 26 09:03:29 crc kubenswrapper[4909]: I1126 09:03:29.136473 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4902c8d170676b076d925ffd2fca538a50badbf9b6d8121867609331a50ba1"} err="failed to get container status \"2a4902c8d170676b076d925ffd2fca538a50badbf9b6d8121867609331a50ba1\": rpc error: code = NotFound desc = could not find container \"2a4902c8d170676b076d925ffd2fca538a50badbf9b6d8121867609331a50ba1\": container with ID starting with 2a4902c8d170676b076d925ffd2fca538a50badbf9b6d8121867609331a50ba1 not found: ID does not exist" Nov 26 09:03:30 crc kubenswrapper[4909]: I1126 09:03:30.511634 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" path="/var/lib/kubelet/pods/fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb/volumes" Nov 26 09:03:31 crc kubenswrapper[4909]: I1126 09:03:31.034823 4909 generic.go:334] "Generic (PLEG): container finished" podID="31bd4baf-44de-4ad5-84cc-915eddf3a7da" containerID="9fc12217d7f3386e329a69ae7d5a21e4ab5ee58a4205b9f4235b60d7fbe6fdf8" exitCode=0 Nov 26 09:03:31 crc kubenswrapper[4909]: I1126 09:03:31.034964 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-9lqqb" event={"ID":"31bd4baf-44de-4ad5-84cc-915eddf3a7da","Type":"ContainerDied","Data":"9fc12217d7f3386e329a69ae7d5a21e4ab5ee58a4205b9f4235b60d7fbe6fdf8"} Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.628149 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.806752 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-inventory\") pod \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.806888 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ssh-key\") pod \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.807056 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ceph\") pod \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.807123 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flkt2\" (UniqueName: \"kubernetes.io/projected/31bd4baf-44de-4ad5-84cc-915eddf3a7da-kube-api-access-flkt2\") pod \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\" (UID: \"31bd4baf-44de-4ad5-84cc-915eddf3a7da\") " Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.814585 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ceph" (OuterVolumeSpecName: "ceph") pod "31bd4baf-44de-4ad5-84cc-915eddf3a7da" (UID: "31bd4baf-44de-4ad5-84cc-915eddf3a7da"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.814756 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bd4baf-44de-4ad5-84cc-915eddf3a7da-kube-api-access-flkt2" (OuterVolumeSpecName: "kube-api-access-flkt2") pod "31bd4baf-44de-4ad5-84cc-915eddf3a7da" (UID: "31bd4baf-44de-4ad5-84cc-915eddf3a7da"). InnerVolumeSpecName "kube-api-access-flkt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.838682 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-inventory" (OuterVolumeSpecName: "inventory") pod "31bd4baf-44de-4ad5-84cc-915eddf3a7da" (UID: "31bd4baf-44de-4ad5-84cc-915eddf3a7da"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.840965 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "31bd4baf-44de-4ad5-84cc-915eddf3a7da" (UID: "31bd4baf-44de-4ad5-84cc-915eddf3a7da"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.909460 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.909501 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flkt2\" (UniqueName: \"kubernetes.io/projected/31bd4baf-44de-4ad5-84cc-915eddf3a7da-kube-api-access-flkt2\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.909515 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:32 crc kubenswrapper[4909]: I1126 09:03:32.909524 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31bd4baf-44de-4ad5-84cc-915eddf3a7da-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.056837 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-9lqqb" event={"ID":"31bd4baf-44de-4ad5-84cc-915eddf3a7da","Type":"ContainerDied","Data":"561bede9cfb67fe6e0cc5ae663c968dd0e10ec170eff2f0498a0f40a37d8d899"} Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.056871 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="561bede9cfb67fe6e0cc5ae663c968dd0e10ec170eff2f0498a0f40a37d8d899" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.056897 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-9lqqb" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.127318 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-vsfdl"] Nov 26 09:03:33 crc kubenswrapper[4909]: E1126 09:03:33.128015 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerName="registry-server" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.128091 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerName="registry-server" Nov 26 09:03:33 crc kubenswrapper[4909]: E1126 09:03:33.128167 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerName="extract-utilities" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.128214 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerName="extract-utilities" Nov 26 09:03:33 crc kubenswrapper[4909]: E1126 09:03:33.128270 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bd4baf-44de-4ad5-84cc-915eddf3a7da" containerName="run-os-openstack-openstack-cell1" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.128318 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bd4baf-44de-4ad5-84cc-915eddf3a7da" containerName="run-os-openstack-openstack-cell1" Nov 26 09:03:33 crc kubenswrapper[4909]: E1126 09:03:33.128393 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerName="extract-content" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.128450 4909 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerName="extract-content" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.128715 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbec73aa-6098-4b5d-a79d-64c1e0e6b8fb" containerName="registry-server" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.128782 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="31bd4baf-44de-4ad5-84cc-915eddf3a7da" containerName="run-os-openstack-openstack-cell1" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.129657 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.131757 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.131838 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.131838 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.132195 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.138383 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-vsfdl"] Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.215567 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv9d2\" (UniqueName: \"kubernetes.io/projected/21301b54-6aca-4911-a8d3-1b346e9ae2c1-kube-api-access-tv9d2\") pod \"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.215688 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-inventory\") pod \"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.215818 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.215895 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ceph\") pod \"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.317881 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-inventory\") pod 
\"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.318050 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.318132 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ceph\") pod \"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.318230 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv9d2\" (UniqueName: \"kubernetes.io/projected/21301b54-6aca-4911-a8d3-1b346e9ae2c1-kube-api-access-tv9d2\") pod \"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.322049 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.322053 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ceph\") pod \"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.333562 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-inventory\") pod \"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.336545 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv9d2\" (UniqueName: \"kubernetes.io/projected/21301b54-6aca-4911-a8d3-1b346e9ae2c1-kube-api-access-tv9d2\") pod \"reboot-os-openstack-openstack-cell1-vsfdl\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:33 crc kubenswrapper[4909]: I1126 09:03:33.446126 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:34 crc kubenswrapper[4909]: I1126 09:03:34.032669 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-vsfdl"] Nov 26 09:03:34 crc kubenswrapper[4909]: I1126 09:03:34.068047 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" event={"ID":"21301b54-6aca-4911-a8d3-1b346e9ae2c1","Type":"ContainerStarted","Data":"a47e28fbb4c9feff4ec43a1256204447630453f4e4d4d9600e7fd66d7b7b37f2"} Nov 26 09:03:36 crc kubenswrapper[4909]: I1126 09:03:36.095973 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" event={"ID":"21301b54-6aca-4911-a8d3-1b346e9ae2c1","Type":"ContainerStarted","Data":"1f2cd8ead5694386626cc3715f614d94b7fd4b50e2ebdb65a8706cd90eea3174"} Nov 26 09:03:51 crc kubenswrapper[4909]: I1126 09:03:51.280439 4909 generic.go:334] "Generic (PLEG): container finished" podID="21301b54-6aca-4911-a8d3-1b346e9ae2c1" containerID="1f2cd8ead5694386626cc3715f614d94b7fd4b50e2ebdb65a8706cd90eea3174" exitCode=0 Nov 26 09:03:51 crc kubenswrapper[4909]: I1126 09:03:51.280518 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" event={"ID":"21301b54-6aca-4911-a8d3-1b346e9ae2c1","Type":"ContainerDied","Data":"1f2cd8ead5694386626cc3715f614d94b7fd4b50e2ebdb65a8706cd90eea3174"} Nov 26 09:03:52 crc kubenswrapper[4909]: I1126 09:03:52.788472 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:52 crc kubenswrapper[4909]: I1126 09:03:52.877545 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ceph\") pod \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " Nov 26 09:03:52 crc kubenswrapper[4909]: I1126 09:03:52.877657 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ssh-key\") pod \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " Nov 26 09:03:52 crc kubenswrapper[4909]: I1126 09:03:52.883984 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ceph" (OuterVolumeSpecName: "ceph") pod "21301b54-6aca-4911-a8d3-1b346e9ae2c1" (UID: "21301b54-6aca-4911-a8d3-1b346e9ae2c1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:03:52 crc kubenswrapper[4909]: I1126 09:03:52.911637 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "21301b54-6aca-4911-a8d3-1b346e9ae2c1" (UID: "21301b54-6aca-4911-a8d3-1b346e9ae2c1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:03:52 crc kubenswrapper[4909]: I1126 09:03:52.979436 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv9d2\" (UniqueName: \"kubernetes.io/projected/21301b54-6aca-4911-a8d3-1b346e9ae2c1-kube-api-access-tv9d2\") pod \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " Nov 26 09:03:52 crc kubenswrapper[4909]: I1126 09:03:52.979511 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-inventory\") pod \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\" (UID: \"21301b54-6aca-4911-a8d3-1b346e9ae2c1\") " Nov 26 09:03:52 crc kubenswrapper[4909]: I1126 09:03:52.980031 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:52 crc kubenswrapper[4909]: I1126 09:03:52.980060 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:52 crc kubenswrapper[4909]: I1126 09:03:52.983488 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21301b54-6aca-4911-a8d3-1b346e9ae2c1-kube-api-access-tv9d2" (OuterVolumeSpecName: "kube-api-access-tv9d2") pod "21301b54-6aca-4911-a8d3-1b346e9ae2c1" (UID: "21301b54-6aca-4911-a8d3-1b346e9ae2c1"). InnerVolumeSpecName "kube-api-access-tv9d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.006786 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-inventory" (OuterVolumeSpecName: "inventory") pod "21301b54-6aca-4911-a8d3-1b346e9ae2c1" (UID: "21301b54-6aca-4911-a8d3-1b346e9ae2c1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.081527 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv9d2\" (UniqueName: \"kubernetes.io/projected/21301b54-6aca-4911-a8d3-1b346e9ae2c1-kube-api-access-tv9d2\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.081566 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21301b54-6aca-4911-a8d3-1b346e9ae2c1-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.304416 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" event={"ID":"21301b54-6aca-4911-a8d3-1b346e9ae2c1","Type":"ContainerDied","Data":"a47e28fbb4c9feff4ec43a1256204447630453f4e4d4d9600e7fd66d7b7b37f2"} Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.304478 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47e28fbb4c9feff4ec43a1256204447630453f4e4d4d9600e7fd66d7b7b37f2" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.304418 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-vsfdl" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.418653 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-vwl5m"] Nov 26 09:03:53 crc kubenswrapper[4909]: E1126 09:03:53.419222 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21301b54-6aca-4911-a8d3-1b346e9ae2c1" containerName="reboot-os-openstack-openstack-cell1" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.419243 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="21301b54-6aca-4911-a8d3-1b346e9ae2c1" containerName="reboot-os-openstack-openstack-cell1" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.419476 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="21301b54-6aca-4911-a8d3-1b346e9ae2c1" containerName="reboot-os-openstack-openstack-cell1" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.420446 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.422282 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.422420 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.425315 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.425582 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.431336 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-vwl5m"] Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.489132 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.489180 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.489212 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.489332 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ssh-key\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.489572 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ceph\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.489738 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbvfp\" (UniqueName: \"kubernetes.io/projected/62dd5e07-614e-4604-a806-0464413c77f5-kube-api-access-tbvfp\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.489788 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.489854 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.489878 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-inventory\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.489912 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.489965 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.490099 4909 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592467 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592558 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592582 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592629 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592692 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ssh-key\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592741 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ceph\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592791 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbvfp\" (UniqueName: \"kubernetes.io/projected/62dd5e07-614e-4604-a806-0464413c77f5-kube-api-access-tbvfp\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592811 4909 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592834 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592857 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-inventory\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592884 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.592911 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.597037 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.597449 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-inventory\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.597686 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.598024 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.598127 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.599225 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ceph\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.599806 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ssh-key\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.600019 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.600858 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.602091 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.602416 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.609518 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbvfp\" (UniqueName: \"kubernetes.io/projected/62dd5e07-614e-4604-a806-0464413c77f5-kube-api-access-tbvfp\") pod 
\"install-certs-openstack-openstack-cell1-vwl5m\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:53 crc kubenswrapper[4909]: I1126 09:03:53.742520 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:03:54 crc kubenswrapper[4909]: I1126 09:03:54.307796 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-vwl5m"] Nov 26 09:03:55 crc kubenswrapper[4909]: I1126 09:03:55.328913 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" event={"ID":"62dd5e07-614e-4604-a806-0464413c77f5","Type":"ContainerStarted","Data":"15302450dad438073b6cc903eb5a06e05144162eb826386d0fcaafd8ac967a7c"} Nov 26 09:03:55 crc kubenswrapper[4909]: I1126 09:03:55.330382 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" event={"ID":"62dd5e07-614e-4604-a806-0464413c77f5","Type":"ContainerStarted","Data":"c79713713ff5a27d97237d1b35bde6717742adc1d575c26c4d9f213a22306e54"} Nov 26 09:03:55 crc kubenswrapper[4909]: I1126 09:03:55.349969 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" podStartSLOduration=1.918371621 podStartE2EDuration="2.349945782s" podCreationTimestamp="2025-11-26 09:03:53 +0000 UTC" firstStartedPulling="2025-11-26 09:03:54.318751426 +0000 UTC m=+7406.464962592" lastFinishedPulling="2025-11-26 09:03:54.750325597 +0000 UTC m=+7406.896536753" observedRunningTime="2025-11-26 09:03:55.34507586 +0000 UTC m=+7407.491287026" watchObservedRunningTime="2025-11-26 09:03:55.349945782 +0000 UTC m=+7407.496156958" Nov 26 09:04:14 crc kubenswrapper[4909]: I1126 09:04:14.532174 4909 generic.go:334] "Generic (PLEG): container finished" podID="62dd5e07-614e-4604-a806-0464413c77f5" containerID="15302450dad438073b6cc903eb5a06e05144162eb826386d0fcaafd8ac967a7c" exitCode=0 Nov 26 09:04:14 crc kubenswrapper[4909]: I1126 09:04:14.532300 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" event={"ID":"62dd5e07-614e-4604-a806-0464413c77f5","Type":"ContainerDied","Data":"15302450dad438073b6cc903eb5a06e05144162eb826386d0fcaafd8ac967a7c"} Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.061526 4909 util.go:48] "No ready sandbox for pod can be found. 
Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.184930 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-sriov-combined-ca-bundle\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.185069 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ceph\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.185091 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-libvirt-combined-ca-bundle\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.185131 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-bootstrap-combined-ca-bundle\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.185160 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-metadata-combined-ca-bundle\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.185184 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbvfp\" (UniqueName: \"kubernetes.io/projected/62dd5e07-614e-4604-a806-0464413c77f5-kube-api-access-tbvfp\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.185970 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-inventory\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.186024 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-dhcp-combined-ca-bundle\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.186058 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-telemetry-combined-ca-bundle\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.186075 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ssh-key\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.186094 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-nova-combined-ca-bundle\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.186146 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ovn-combined-ca-bundle\") pod \"62dd5e07-614e-4604-a806-0464413c77f5\" (UID: \"62dd5e07-614e-4604-a806-0464413c77f5\") " Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.193347 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.193729 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.193764 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ceph" (OuterVolumeSpecName: "ceph") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.193799 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.193918 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.194097 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.194277 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.194799 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.195585 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62dd5e07-614e-4604-a806-0464413c77f5-kube-api-access-tbvfp" (OuterVolumeSpecName: "kube-api-access-tbvfp") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "kube-api-access-tbvfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.198694 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.230807 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-inventory" (OuterVolumeSpecName: "inventory") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.236666 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "62dd5e07-614e-4604-a806-0464413c77f5" (UID: "62dd5e07-614e-4604-a806-0464413c77f5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288747 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288782 4909 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288797 4909 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288826 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288838 4909 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288847 4909 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288855 4909 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288865 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288873 4909 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288882 4909 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288910 4909 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd5e07-614e-4604-a806-0464413c77f5-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.288945 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbvfp\" (UniqueName: \"kubernetes.io/projected/62dd5e07-614e-4604-a806-0464413c77f5-kube-api-access-tbvfp\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.553812 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" event={"ID":"62dd5e07-614e-4604-a806-0464413c77f5","Type":"ContainerDied","Data":"c79713713ff5a27d97237d1b35bde6717742adc1d575c26c4d9f213a22306e54"} Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.553859 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c79713713ff5a27d97237d1b35bde6717742adc1d575c26c4d9f213a22306e54" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.553927 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-vwl5m" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.694777 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-rnmcj"] Nov 26 09:04:16 crc kubenswrapper[4909]: E1126 09:04:16.695254 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62dd5e07-614e-4604-a806-0464413c77f5" containerName="install-certs-openstack-openstack-cell1" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.695274 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="62dd5e07-614e-4604-a806-0464413c77f5" containerName="install-certs-openstack-openstack-cell1" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.695494 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="62dd5e07-614e-4604-a806-0464413c77f5" containerName="install-certs-openstack-openstack-cell1" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.696389 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.706531 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.706980 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.707978 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.708774 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.723904 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-rnmcj"] Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.799061 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.799495 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ceph\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.799673 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-inventory\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.799799 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7lnv\" (UniqueName: \"kubernetes.io/projected/097830ef-7c28-40bd-b183-d395c23b463c-kube-api-access-x7lnv\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.901643 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ceph\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.901726 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-inventory\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.901763 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7lnv\" (UniqueName: \"kubernetes.io/projected/097830ef-7c28-40bd-b183-d395c23b463c-kube-api-access-x7lnv\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.901799 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.907094 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-inventory\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.907404 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 crc kubenswrapper[4909]: I1126 09:04:16.908863 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ceph\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:16 
crc kubenswrapper[4909]: I1126 09:04:16.919570 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7lnv\" (UniqueName: \"kubernetes.io/projected/097830ef-7c28-40bd-b183-d395c23b463c-kube-api-access-x7lnv\") pod \"ceph-client-openstack-openstack-cell1-rnmcj\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:17 crc kubenswrapper[4909]: I1126 09:04:17.016519 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:17 crc kubenswrapper[4909]: I1126 09:04:17.549391 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-rnmcj"] Nov 26 09:04:17 crc kubenswrapper[4909]: I1126 09:04:17.573332 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" event={"ID":"097830ef-7c28-40bd-b183-d395c23b463c","Type":"ContainerStarted","Data":"22fbe84ea09f33b8d2ec5524baa232dde8d664b63ac8155e3b28f210d144beca"} Nov 26 09:04:20 crc kubenswrapper[4909]: I1126 09:04:20.605164 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" event={"ID":"097830ef-7c28-40bd-b183-d395c23b463c","Type":"ContainerStarted","Data":"29ef1926a40a06d9fda8366d0e0c78720bfc1049980d8f19d4a55785046e947f"} Nov 26 09:04:20 crc kubenswrapper[4909]: I1126 09:04:20.628869 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" podStartSLOduration=2.440402282 podStartE2EDuration="4.628851742s" podCreationTimestamp="2025-11-26 09:04:16 +0000 UTC" firstStartedPulling="2025-11-26 09:04:17.558253791 +0000 UTC m=+7429.704464957" lastFinishedPulling="2025-11-26 09:04:19.746703241 +0000 UTC m=+7431.892914417" observedRunningTime="2025-11-26 09:04:20.621416659 +0000 UTC m=+7432.767627845" watchObservedRunningTime="2025-11-26 09:04:20.628851742 +0000 UTC m=+7432.775062908" Nov 26 09:04:25 crc kubenswrapper[4909]: I1126 09:04:25.657111 4909 generic.go:334] "Generic (PLEG): container finished" podID="097830ef-7c28-40bd-b183-d395c23b463c" containerID="29ef1926a40a06d9fda8366d0e0c78720bfc1049980d8f19d4a55785046e947f" exitCode=0 Nov 26 09:04:25 crc kubenswrapper[4909]: I1126 09:04:25.657159 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" event={"ID":"097830ef-7c28-40bd-b183-d395c23b463c","Type":"ContainerDied","Data":"29ef1926a40a06d9fda8366d0e0c78720bfc1049980d8f19d4a55785046e947f"} Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.307520 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.463701 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ceph\") pod \"097830ef-7c28-40bd-b183-d395c23b463c\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.463779 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7lnv\" (UniqueName: \"kubernetes.io/projected/097830ef-7c28-40bd-b183-d395c23b463c-kube-api-access-x7lnv\") pod \"097830ef-7c28-40bd-b183-d395c23b463c\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.463958 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ssh-key\") pod \"097830ef-7c28-40bd-b183-d395c23b463c\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.464044 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-inventory\") pod \"097830ef-7c28-40bd-b183-d395c23b463c\" (UID: \"097830ef-7c28-40bd-b183-d395c23b463c\") " Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.470342 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097830ef-7c28-40bd-b183-d395c23b463c-kube-api-access-x7lnv" (OuterVolumeSpecName: "kube-api-access-x7lnv") pod "097830ef-7c28-40bd-b183-d395c23b463c" (UID: "097830ef-7c28-40bd-b183-d395c23b463c"). InnerVolumeSpecName "kube-api-access-x7lnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.470464 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ceph" (OuterVolumeSpecName: "ceph") pod "097830ef-7c28-40bd-b183-d395c23b463c" (UID: "097830ef-7c28-40bd-b183-d395c23b463c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.503729 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-inventory" (OuterVolumeSpecName: "inventory") pod "097830ef-7c28-40bd-b183-d395c23b463c" (UID: "097830ef-7c28-40bd-b183-d395c23b463c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.508281 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "097830ef-7c28-40bd-b183-d395c23b463c" (UID: "097830ef-7c28-40bd-b183-d395c23b463c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
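
The pod_startup_latency_tracker entries in this log relate their fields arithmetically: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (firstStartedPulling to lastFinishedPulling, measured on the monotonic m=+... clock). A worked check against the ceph-client-openstack-openstack-cell1-rnmcj entry above, with timestamps truncated to microseconds:

from datetime import datetime

F = "%Y-%m-%d %H:%M:%S.%f %z"
created    = datetime.strptime("2025-11-26 09:04:16.000000 +0000", F)
first_pull = datetime.strptime("2025-11-26 09:04:17.558253 +0000", F)
last_pull  = datetime.strptime("2025-11-26 09:04:19.746703 +0000", F)
running    = datetime.strptime("2025-11-26 09:04:20.628851 +0000", F)

# ~4.629 s, matching podStartE2EDuration="4.628851742s" in the log
e2e = (running - created).total_seconds()
# ~2.440 s, matching podStartSLOduration=2.440402282 once pull time is excluded
slo = e2e - (last_pull - first_pull).total_seconds()
print(f"E2E {e2e:.3f}s, SLO {slo:.3f}s")

The tiny residual (on the order of ten nanoseconds) comes from the pull window being measured on the monotonic clock rather than the wall-clock timestamps.
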
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.567275 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.567321 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7lnv\" (UniqueName: \"kubernetes.io/projected/097830ef-7c28-40bd-b183-d395c23b463c-kube-api-access-x7lnv\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.567337 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.567350 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/097830ef-7c28-40bd-b183-d395c23b463c-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.683670 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" event={"ID":"097830ef-7c28-40bd-b183-d395c23b463c","Type":"ContainerDied","Data":"22fbe84ea09f33b8d2ec5524baa232dde8d664b63ac8155e3b28f210d144beca"} Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.683718 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-rnmcj" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.683723 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22fbe84ea09f33b8d2ec5524baa232dde8d664b63ac8155e3b28f210d144beca" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.774051 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-7gzdn"] Nov 26 09:04:27 crc kubenswrapper[4909]: E1126 09:04:27.774573 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097830ef-7c28-40bd-b183-d395c23b463c" containerName="ceph-client-openstack-openstack-cell1" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.774838 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="097830ef-7c28-40bd-b183-d395c23b463c" containerName="ceph-client-openstack-openstack-cell1" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.775185 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="097830ef-7c28-40bd-b183-d395c23b463c" containerName="ceph-client-openstack-openstack-cell1" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.776031 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.782079 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.782114 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.782151 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.782226 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.783187 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.796334 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-7gzdn"] Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.872934 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ssh-key\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.872988 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2wbs\" (UniqueName: \"kubernetes.io/projected/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-kube-api-access-k2wbs\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.873026 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ceph\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.873079 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-inventory\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.873166 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.873226 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: 
\"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.974896 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.974966 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.975035 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ssh-key\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.975071 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2wbs\" (UniqueName: \"kubernetes.io/projected/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-kube-api-access-k2wbs\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.975111 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ceph\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.975183 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-inventory\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.975961 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.978866 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-inventory\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.979062 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ceph\") pod 
\"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.979320 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ssh-key\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.988338 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:27 crc kubenswrapper[4909]: I1126 09:04:27.993694 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2wbs\" (UniqueName: \"kubernetes.io/projected/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-kube-api-access-k2wbs\") pod \"ovn-openstack-openstack-cell1-7gzdn\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:28 crc kubenswrapper[4909]: I1126 09:04:28.095198 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:04:28 crc kubenswrapper[4909]: I1126 09:04:28.675653 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-7gzdn"] Nov 26 09:04:28 crc kubenswrapper[4909]: W1126 09:04:28.678437 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podede0bcc4_4c9a_43fb_b6f6_c32aa1f43e4f.slice/crio-37d5fd1b698f96467814bb291a3777f95a76aa598efaabc4cd2a07ea36e4f634 WatchSource:0}: Error finding container 37d5fd1b698f96467814bb291a3777f95a76aa598efaabc4cd2a07ea36e4f634: Status 404 returned error can't find the container with id 37d5fd1b698f96467814bb291a3777f95a76aa598efaabc4cd2a07ea36e4f634 Nov 26 09:04:28 crc kubenswrapper[4909]: I1126 09:04:28.694367 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-7gzdn" event={"ID":"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f","Type":"ContainerStarted","Data":"37d5fd1b698f96467814bb291a3777f95a76aa598efaabc4cd2a07ea36e4f634"} Nov 26 09:04:29 crc kubenswrapper[4909]: I1126 09:04:29.597261 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:04:30 crc kubenswrapper[4909]: I1126 09:04:30.729627 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-7gzdn" event={"ID":"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f","Type":"ContainerStarted","Data":"6a5a68374c11fb9aa80d5048da616db84814f753538b7f5fcb4aa5bbd0e08351"} Nov 26 09:04:30 crc kubenswrapper[4909]: I1126 09:04:30.758087 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-7gzdn" podStartSLOduration=2.843704991 podStartE2EDuration="3.758068071s" podCreationTimestamp="2025-11-26 09:04:27 +0000 UTC" firstStartedPulling="2025-11-26 09:04:28.680495314 +0000 UTC m=+7440.826706480" lastFinishedPulling="2025-11-26 09:04:29.594858394 +0000 UTC m=+7441.741069560" 
observedRunningTime="2025-11-26 09:04:30.746878316 +0000 UTC m=+7442.893089512" watchObservedRunningTime="2025-11-26 09:04:30.758068071 +0000 UTC m=+7442.904279237" Nov 26 09:05:07 crc kubenswrapper[4909]: I1126 09:05:07.301757 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:05:07 crc kubenswrapper[4909]: I1126 09:05:07.302333 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:05:37 crc kubenswrapper[4909]: I1126 09:05:37.300644 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:05:37 crc kubenswrapper[4909]: I1126 09:05:37.301234 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:05:37 crc kubenswrapper[4909]: I1126 09:05:37.448580 4909 generic.go:334] "Generic (PLEG): container finished" podID="ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f" containerID="6a5a68374c11fb9aa80d5048da616db84814f753538b7f5fcb4aa5bbd0e08351" exitCode=0 Nov 26 09:05:37 crc kubenswrapper[4909]: I1126 09:05:37.448632 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-7gzdn" event={"ID":"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f","Type":"ContainerDied","Data":"6a5a68374c11fb9aa80d5048da616db84814f753538b7f5fcb4aa5bbd0e08351"} Nov 26 09:05:38 crc kubenswrapper[4909]: I1126 09:05:38.952632 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-7gzdn"
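
Interleaved with the dataplane jobs, the kubelet is also reporting a failing liveness probe for machine-config-daemon-4lffv: the GET to http://127.0.0.1:8798/health is refused at 09:05:07 and again at 09:05:37, i.e. on a roughly 30-second probe period. A probe only restarts a container after failureThreshold consecutive failures (the Kubernetes default is 3; the actual value lives in the pod spec, not in this log), so a tally of "Probe failed" records per container is a quick way to gauge severity. A sketch, matching the prober.go:107 line format seen here:

import re
from collections import Counter

# Matches lines like:
#   "Probe failed" probeType="Liveness" pod="ns/name" podUID="..." containerName="..."
PROBE = re.compile(
    r'"Probe failed" probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)" '
    r'podUID="[^"]+" containerName="(?P<ctr>[^"]+)"'
)

def probe_failures(journal_text):
    """Count 'Probe failed' records keyed by (pod, container, probe type)."""
    counts = Counter()
    for line in journal_text.splitlines():
        if (m := PROBE.search(line)):
            counts[(m.group("pod"), m.group("ctr"), m.group("type"))] += 1
    return counts

Over this window the tally would show two Liveness failures for the machine-config-daemon container, below the default restart threshold.
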
Nov 26 09:05:38 crc kubenswrapper[4909]: I1126 09:05:38.997526 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2wbs\" (UniqueName: \"kubernetes.io/projected/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-kube-api-access-k2wbs\") pod \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " Nov 26 09:05:38 crc kubenswrapper[4909]: I1126 09:05:38.997682 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-inventory\") pod \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " Nov 26 09:05:38 crc kubenswrapper[4909]: I1126 09:05:38.997761 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ssh-key\") pod \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " Nov 26 09:05:38 crc kubenswrapper[4909]: I1126 09:05:38.997782 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ceph\") pod \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " Nov 26 09:05:38 crc kubenswrapper[4909]: I1126 09:05:38.997835 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovn-combined-ca-bundle\") pod \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " Nov 26 09:05:38 crc kubenswrapper[4909]: I1126 09:05:38.997958 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovncontroller-config-0\") pod \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\" (UID: \"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f\") " Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.004638 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ceph" (OuterVolumeSpecName: "ceph") pod "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f" (UID: "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.004715 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f" (UID: "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.006797 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-kube-api-access-k2wbs" (OuterVolumeSpecName: "kube-api-access-k2wbs") pod "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f" (UID: "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f"). InnerVolumeSpecName "kube-api-access-k2wbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.036386 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f" (UID: "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.039241 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-inventory" (OuterVolumeSpecName: "inventory") pod "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f" (UID: "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.044302 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f" (UID: "ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.100733 4909 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.100762 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2wbs\" (UniqueName: \"kubernetes.io/projected/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-kube-api-access-k2wbs\") on node \"crc\" DevicePath \"\"" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.100772 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.100781 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.100791 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.100800 4909 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.481011 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-7gzdn" event={"ID":"ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f","Type":"ContainerDied","Data":"37d5fd1b698f96467814bb291a3777f95a76aa598efaabc4cd2a07ea36e4f634"} Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.481061 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37d5fd1b698f96467814bb291a3777f95a76aa598efaabc4cd2a07ea36e4f634" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 
09:05:39.481127 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-7gzdn" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.582496 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-8jtc5"] Nov 26 09:05:39 crc kubenswrapper[4909]: E1126 09:05:39.582961 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f" containerName="ovn-openstack-openstack-cell1" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.582978 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f" containerName="ovn-openstack-openstack-cell1" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.583222 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f" containerName="ovn-openstack-openstack-cell1" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.583982 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.586864 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.587021 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.587107 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.587287 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.588268 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.588866 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.602729 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-8jtc5"] Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.614553 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.614764 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.615031 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.615072 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.615123 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnxhs\" (UniqueName: \"kubernetes.io/projected/06f085fc-7566-4e13-8b58-9d2385e57def-kube-api-access-tnxhs\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.615195 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.615272 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.716185 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.716232 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.716260 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnxhs\" (UniqueName: \"kubernetes.io/projected/06f085fc-7566-4e13-8b58-9d2385e57def-kube-api-access-tnxhs\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: 
I1126 09:05:39.716292 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.716325 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.716360 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.716416 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.720794 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.721254 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.721338 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.721621 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.722102 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.727504 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.743894 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnxhs\" (UniqueName: \"kubernetes.io/projected/06f085fc-7566-4e13-8b58-9d2385e57def-kube-api-access-tnxhs\") pod \"neutron-metadata-openstack-openstack-cell1-8jtc5\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:39 crc kubenswrapper[4909]: I1126 09:05:39.917186 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:05:40 crc kubenswrapper[4909]: I1126 09:05:40.522043 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-8jtc5"] Nov 26 09:05:41 crc kubenswrapper[4909]: I1126 09:05:41.498621 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" event={"ID":"06f085fc-7566-4e13-8b58-9d2385e57def","Type":"ContainerStarted","Data":"9b29155b3116701fad5b371dbbac25c85be02b1ca4c76fe854cf901dfb7f60f1"} Nov 26 09:05:41 crc kubenswrapper[4909]: I1126 09:05:41.499218 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" event={"ID":"06f085fc-7566-4e13-8b58-9d2385e57def","Type":"ContainerStarted","Data":"16b7859825b2debcf09346f5409a26245f94e07a0c548992e8b080763a90a660"} Nov 26 09:05:41 crc kubenswrapper[4909]: I1126 09:05:41.527472 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" podStartSLOduration=1.9590357809999999 podStartE2EDuration="2.527453735s" podCreationTimestamp="2025-11-26 09:05:39 +0000 UTC" firstStartedPulling="2025-11-26 09:05:40.516417748 +0000 UTC m=+7512.662628904" lastFinishedPulling="2025-11-26 09:05:41.084835692 +0000 UTC m=+7513.231046858" observedRunningTime="2025-11-26 09:05:41.517212006 +0000 UTC m=+7513.663423172" watchObservedRunningTime="2025-11-26 09:05:41.527453735 +0000 UTC m=+7513.673664901" Nov 26 09:06:07 crc kubenswrapper[4909]: I1126 09:06:07.301035 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:06:07 crc kubenswrapper[4909]: I1126 09:06:07.301631 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:06:07 crc kubenswrapper[4909]: I1126 09:06:07.301682 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 09:06:07 crc kubenswrapper[4909]: I1126 09:06:07.302449 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9485f7f07079fe5373a349cfe3c947df8d42987ac57bf67685a11bc4c748e707"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 09:06:07 crc kubenswrapper[4909]: I1126 09:06:07.302517 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://9485f7f07079fe5373a349cfe3c947df8d42987ac57bf67685a11bc4c748e707" gracePeriod=600 Nov 26 09:06:07 crc kubenswrapper[4909]: I1126 09:06:07.767265 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="9485f7f07079fe5373a349cfe3c947df8d42987ac57bf67685a11bc4c748e707" exitCode=0 Nov 26 09:06:07 crc kubenswrapper[4909]: I1126 09:06:07.767313 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"9485f7f07079fe5373a349cfe3c947df8d42987ac57bf67685a11bc4c748e707"} Nov 26 09:06:07 crc kubenswrapper[4909]: I1126 09:06:07.767345 4909 scope.go:117] "RemoveContainer" containerID="514efaa51c235ff548fd4f2c02c95fd72c4c9c3b799772523445d171b31894b7" Nov 26 09:06:08 crc kubenswrapper[4909]: I1126 09:06:08.779162 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4"} Nov 26 09:06:38 crc kubenswrapper[4909]: I1126 09:06:38.084068 4909 generic.go:334] "Generic (PLEG): container finished" podID="06f085fc-7566-4e13-8b58-9d2385e57def" containerID="9b29155b3116701fad5b371dbbac25c85be02b1ca4c76fe854cf901dfb7f60f1" exitCode=0 Nov 26 09:06:38 crc kubenswrapper[4909]: I1126 09:06:38.084903 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" event={"ID":"06f085fc-7566-4e13-8b58-9d2385e57def","Type":"ContainerDied","Data":"9b29155b3116701fad5b371dbbac25c85be02b1ca4c76fe854cf901dfb7f60f1"} Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.614049 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.665196 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-inventory\") pod \"06f085fc-7566-4e13-8b58-9d2385e57def\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.665671 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnxhs\" (UniqueName: \"kubernetes.io/projected/06f085fc-7566-4e13-8b58-9d2385e57def-kube-api-access-tnxhs\") pod \"06f085fc-7566-4e13-8b58-9d2385e57def\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.665875 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ssh-key\") pod \"06f085fc-7566-4e13-8b58-9d2385e57def\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.666076 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-metadata-combined-ca-bundle\") pod \"06f085fc-7566-4e13-8b58-9d2385e57def\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.666405 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ceph\") pod \"06f085fc-7566-4e13-8b58-9d2385e57def\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.666623 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-ovn-metadata-agent-neutron-config-0\") pod \"06f085fc-7566-4e13-8b58-9d2385e57def\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.667058 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-nova-metadata-neutron-config-0\") pod \"06f085fc-7566-4e13-8b58-9d2385e57def\" (UID: \"06f085fc-7566-4e13-8b58-9d2385e57def\") " Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.670777 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "06f085fc-7566-4e13-8b58-9d2385e57def" (UID: "06f085fc-7566-4e13-8b58-9d2385e57def"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.696077 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "06f085fc-7566-4e13-8b58-9d2385e57def" (UID: "06f085fc-7566-4e13-8b58-9d2385e57def"). 
InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.704754 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f085fc-7566-4e13-8b58-9d2385e57def-kube-api-access-tnxhs" (OuterVolumeSpecName: "kube-api-access-tnxhs") pod "06f085fc-7566-4e13-8b58-9d2385e57def" (UID: "06f085fc-7566-4e13-8b58-9d2385e57def"). InnerVolumeSpecName "kube-api-access-tnxhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.705872 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ceph" (OuterVolumeSpecName: "ceph") pod "06f085fc-7566-4e13-8b58-9d2385e57def" (UID: "06f085fc-7566-4e13-8b58-9d2385e57def"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.706785 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-inventory" (OuterVolumeSpecName: "inventory") pod "06f085fc-7566-4e13-8b58-9d2385e57def" (UID: "06f085fc-7566-4e13-8b58-9d2385e57def"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.708740 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "06f085fc-7566-4e13-8b58-9d2385e57def" (UID: "06f085fc-7566-4e13-8b58-9d2385e57def"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.729866 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "06f085fc-7566-4e13-8b58-9d2385e57def" (UID: "06f085fc-7566-4e13-8b58-9d2385e57def"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.771247 4909 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.771548 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.771562 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnxhs\" (UniqueName: \"kubernetes.io/projected/06f085fc-7566-4e13-8b58-9d2385e57def-kube-api-access-tnxhs\") on node \"crc\" DevicePath \"\"" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.771573 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.771582 4909 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.771603 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:06:39 crc kubenswrapper[4909]: I1126 09:06:39.771613 4909 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06f085fc-7566-4e13-8b58-9d2385e57def-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.114893 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" event={"ID":"06f085fc-7566-4e13-8b58-9d2385e57def","Type":"ContainerDied","Data":"16b7859825b2debcf09346f5409a26245f94e07a0c548992e8b080763a90a660"} Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.115156 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16b7859825b2debcf09346f5409a26245f94e07a0c548992e8b080763a90a660" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.114983 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jtc5" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.229538 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-7zc8t"] Nov 26 09:06:40 crc kubenswrapper[4909]: E1126 09:06:40.230072 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f085fc-7566-4e13-8b58-9d2385e57def" containerName="neutron-metadata-openstack-openstack-cell1" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.230094 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f085fc-7566-4e13-8b58-9d2385e57def" containerName="neutron-metadata-openstack-openstack-cell1" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.230500 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f085fc-7566-4e13-8b58-9d2385e57def" containerName="neutron-metadata-openstack-openstack-cell1" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.231314 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.234310 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.234525 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.234732 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.234967 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.235395 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.245723 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-7zc8t"] Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.282845 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.282895 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-inventory\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.282960 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ssh-key\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.282982 4909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.283028 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svngj\" (UniqueName: \"kubernetes.io/projected/c12a232c-8572-40da-bd58-1f46eab0d5b4-kube-api-access-svngj\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.283099 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ceph\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.384578 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ceph\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.384702 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.384729 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-inventory\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.384781 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ssh-key\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.384804 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.384846 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svngj\" (UniqueName: \"kubernetes.io/projected/c12a232c-8572-40da-bd58-1f46eab0d5b4-kube-api-access-svngj\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: 
\"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.388341 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ceph\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.391140 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.392153 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-inventory\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.395291 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ssh-key\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.403019 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.409501 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svngj\" (UniqueName: \"kubernetes.io/projected/c12a232c-8572-40da-bd58-1f46eab0d5b4-kube-api-access-svngj\") pod \"libvirt-openstack-openstack-cell1-7zc8t\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:40 crc kubenswrapper[4909]: I1126 09:06:40.559129 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:06:41 crc kubenswrapper[4909]: I1126 09:06:41.129464 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-7zc8t"] Nov 26 09:06:41 crc kubenswrapper[4909]: I1126 09:06:41.142268 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 09:06:42 crc kubenswrapper[4909]: I1126 09:06:42.164776 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" event={"ID":"c12a232c-8572-40da-bd58-1f46eab0d5b4","Type":"ContainerStarted","Data":"cf66d26f7c71e22ebb2eed14c084bc3ff0bac16bcece82adc6763b1edcd6e440"} Nov 26 09:06:42 crc kubenswrapper[4909]: I1126 09:06:42.165066 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" event={"ID":"c12a232c-8572-40da-bd58-1f46eab0d5b4","Type":"ContainerStarted","Data":"a412bd0b36bdc1b0d14a1ff8a119fd2f18bc1828a04e423bf52bc3745d61e79d"} Nov 26 09:06:42 crc kubenswrapper[4909]: I1126 09:06:42.190645 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" podStartSLOduration=1.705309916 podStartE2EDuration="2.190628403s" podCreationTimestamp="2025-11-26 09:06:40 +0000 UTC" firstStartedPulling="2025-11-26 09:06:41.1419413 +0000 UTC m=+7573.288152476" lastFinishedPulling="2025-11-26 09:06:41.627259777 +0000 UTC m=+7573.773470963" observedRunningTime="2025-11-26 09:06:42.183534709 +0000 UTC m=+7574.329745875" watchObservedRunningTime="2025-11-26 09:06:42.190628403 +0000 UTC m=+7574.336839569" Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.517739 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wv8w2"] Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.520956 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv8w2"] Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.521071 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.710100 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-catalog-content\") pod \"redhat-marketplace-wv8w2\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.710907 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j257f\" (UniqueName: \"kubernetes.io/projected/c00fbda8-925f-4368-8879-f2ddbfc19375-kube-api-access-j257f\") pod \"redhat-marketplace-wv8w2\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.711046 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-utilities\") pod \"redhat-marketplace-wv8w2\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.813245 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j257f\" (UniqueName: \"kubernetes.io/projected/c00fbda8-925f-4368-8879-f2ddbfc19375-kube-api-access-j257f\") pod \"redhat-marketplace-wv8w2\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.813315 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-utilities\") pod \"redhat-marketplace-wv8w2\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.813402 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-catalog-content\") pod \"redhat-marketplace-wv8w2\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.813969 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-catalog-content\") pod \"redhat-marketplace-wv8w2\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.814238 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-utilities\") pod \"redhat-marketplace-wv8w2\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.835834 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j257f\" (UniqueName: \"kubernetes.io/projected/c00fbda8-925f-4368-8879-f2ddbfc19375-kube-api-access-j257f\") pod 
\"redhat-marketplace-wv8w2\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:18 crc kubenswrapper[4909]: I1126 09:08:18.853520 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:19 crc kubenswrapper[4909]: I1126 09:08:19.368609 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv8w2"] Nov 26 09:08:20 crc kubenswrapper[4909]: I1126 09:08:20.255679 4909 generic.go:334] "Generic (PLEG): container finished" podID="c00fbda8-925f-4368-8879-f2ddbfc19375" containerID="c63ca4686aa85fea1f77d5450a30bc48df9b06ca44b6f15728ce4efb2adc1fc1" exitCode=0 Nov 26 09:08:20 crc kubenswrapper[4909]: I1126 09:08:20.255747 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv8w2" event={"ID":"c00fbda8-925f-4368-8879-f2ddbfc19375","Type":"ContainerDied","Data":"c63ca4686aa85fea1f77d5450a30bc48df9b06ca44b6f15728ce4efb2adc1fc1"} Nov 26 09:08:20 crc kubenswrapper[4909]: I1126 09:08:20.256091 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv8w2" event={"ID":"c00fbda8-925f-4368-8879-f2ddbfc19375","Type":"ContainerStarted","Data":"0f127ce07be39a0b0c05b0c55e9304432e68fbe335e6462c73c00f7c89a11333"} Nov 26 09:08:22 crc kubenswrapper[4909]: I1126 09:08:22.283130 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv8w2" event={"ID":"c00fbda8-925f-4368-8879-f2ddbfc19375","Type":"ContainerStarted","Data":"4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1"} Nov 26 09:08:23 crc kubenswrapper[4909]: I1126 09:08:23.295193 4909 generic.go:334] "Generic (PLEG): container finished" podID="c00fbda8-925f-4368-8879-f2ddbfc19375" containerID="4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1" exitCode=0 Nov 26 09:08:23 crc kubenswrapper[4909]: I1126 09:08:23.295296 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv8w2" event={"ID":"c00fbda8-925f-4368-8879-f2ddbfc19375","Type":"ContainerDied","Data":"4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1"} Nov 26 09:08:24 crc kubenswrapper[4909]: I1126 09:08:24.306418 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv8w2" event={"ID":"c00fbda8-925f-4368-8879-f2ddbfc19375","Type":"ContainerStarted","Data":"c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7"} Nov 26 09:08:24 crc kubenswrapper[4909]: I1126 09:08:24.326107 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wv8w2" podStartSLOduration=2.621787334 podStartE2EDuration="6.326088981s" podCreationTimestamp="2025-11-26 09:08:18 +0000 UTC" firstStartedPulling="2025-11-26 09:08:20.259978425 +0000 UTC m=+7672.406189591" lastFinishedPulling="2025-11-26 09:08:23.964280082 +0000 UTC m=+7676.110491238" observedRunningTime="2025-11-26 09:08:24.324374093 +0000 UTC m=+7676.470585269" watchObservedRunningTime="2025-11-26 09:08:24.326088981 +0000 UTC m=+7676.472300147" Nov 26 09:08:28 crc kubenswrapper[4909]: I1126 09:08:28.854744 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:28 crc kubenswrapper[4909]: I1126 09:08:28.855271 4909 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:28 crc kubenswrapper[4909]: I1126 09:08:28.911677 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:29 crc kubenswrapper[4909]: I1126 09:08:29.430230 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:29 crc kubenswrapper[4909]: I1126 09:08:29.485057 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv8w2"] Nov 26 09:08:31 crc kubenswrapper[4909]: I1126 09:08:31.386111 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wv8w2" podUID="c00fbda8-925f-4368-8879-f2ddbfc19375" containerName="registry-server" containerID="cri-o://c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7" gracePeriod=2 Nov 26 09:08:31 crc kubenswrapper[4909]: I1126 09:08:31.914125 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:31 crc kubenswrapper[4909]: I1126 09:08:31.989671 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j257f\" (UniqueName: \"kubernetes.io/projected/c00fbda8-925f-4368-8879-f2ddbfc19375-kube-api-access-j257f\") pod \"c00fbda8-925f-4368-8879-f2ddbfc19375\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " Nov 26 09:08:31 crc kubenswrapper[4909]: I1126 09:08:31.989899 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-utilities\") pod \"c00fbda8-925f-4368-8879-f2ddbfc19375\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " Nov 26 09:08:31 crc kubenswrapper[4909]: I1126 09:08:31.990063 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-catalog-content\") pod \"c00fbda8-925f-4368-8879-f2ddbfc19375\" (UID: \"c00fbda8-925f-4368-8879-f2ddbfc19375\") " Nov 26 09:08:31 crc kubenswrapper[4909]: I1126 09:08:31.991026 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-utilities" (OuterVolumeSpecName: "utilities") pod "c00fbda8-925f-4368-8879-f2ddbfc19375" (UID: "c00fbda8-925f-4368-8879-f2ddbfc19375"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:08:31 crc kubenswrapper[4909]: I1126 09:08:31.997427 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00fbda8-925f-4368-8879-f2ddbfc19375-kube-api-access-j257f" (OuterVolumeSpecName: "kube-api-access-j257f") pod "c00fbda8-925f-4368-8879-f2ddbfc19375" (UID: "c00fbda8-925f-4368-8879-f2ddbfc19375"). InnerVolumeSpecName "kube-api-access-j257f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.009777 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c00fbda8-925f-4368-8879-f2ddbfc19375" (UID: "c00fbda8-925f-4368-8879-f2ddbfc19375"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.092702 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.092750 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j257f\" (UniqueName: \"kubernetes.io/projected/c00fbda8-925f-4368-8879-f2ddbfc19375-kube-api-access-j257f\") on node \"crc\" DevicePath \"\"" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.092761 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00fbda8-925f-4368-8879-f2ddbfc19375-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.398273 4909 generic.go:334] "Generic (PLEG): container finished" podID="c00fbda8-925f-4368-8879-f2ddbfc19375" containerID="c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7" exitCode=0 Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.398372 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv8w2" event={"ID":"c00fbda8-925f-4368-8879-f2ddbfc19375","Type":"ContainerDied","Data":"c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7"} Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.398711 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv8w2" event={"ID":"c00fbda8-925f-4368-8879-f2ddbfc19375","Type":"ContainerDied","Data":"0f127ce07be39a0b0c05b0c55e9304432e68fbe335e6462c73c00f7c89a11333"} Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.398739 4909 scope.go:117] "RemoveContainer" containerID="c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.398393 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv8w2" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.432664 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv8w2"] Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.433423 4909 scope.go:117] "RemoveContainer" containerID="4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.442229 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv8w2"] Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.455959 4909 scope.go:117] "RemoveContainer" containerID="c63ca4686aa85fea1f77d5450a30bc48df9b06ca44b6f15728ce4efb2adc1fc1" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.506450 4909 scope.go:117] "RemoveContainer" containerID="c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7" Nov 26 09:08:32 crc kubenswrapper[4909]: E1126 09:08:32.507108 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7\": container with ID starting with c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7 not found: ID does not exist" containerID="c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.507155 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7"} err="failed to get container status \"c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7\": rpc error: code = NotFound desc = could not find container \"c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7\": container with ID starting with c54d45e389fd22c2eb84489e6527d9d144d4385b85ea85cc7daa1a2552664ee7 not found: ID does not exist" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.507179 4909 scope.go:117] "RemoveContainer" containerID="4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1" Nov 26 09:08:32 crc kubenswrapper[4909]: E1126 09:08:32.508271 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1\": container with ID starting with 4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1 not found: ID does not exist" containerID="4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.508299 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1"} err="failed to get container status \"4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1\": rpc error: code = NotFound desc = could not find container \"4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1\": container with ID starting with 4f0e4ecc8c07b7e48df77234feaf6cfa5a7275f906ca94840528d2963b7b4df1 not found: ID does not exist" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.508313 4909 scope.go:117] "RemoveContainer" containerID="c63ca4686aa85fea1f77d5450a30bc48df9b06ca44b6f15728ce4efb2adc1fc1" Nov 26 09:08:32 crc kubenswrapper[4909]: E1126 09:08:32.508700 4909 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c63ca4686aa85fea1f77d5450a30bc48df9b06ca44b6f15728ce4efb2adc1fc1\": container with ID starting with c63ca4686aa85fea1f77d5450a30bc48df9b06ca44b6f15728ce4efb2adc1fc1 not found: ID does not exist" containerID="c63ca4686aa85fea1f77d5450a30bc48df9b06ca44b6f15728ce4efb2adc1fc1" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.508720 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63ca4686aa85fea1f77d5450a30bc48df9b06ca44b6f15728ce4efb2adc1fc1"} err="failed to get container status \"c63ca4686aa85fea1f77d5450a30bc48df9b06ca44b6f15728ce4efb2adc1fc1\": rpc error: code = NotFound desc = could not find container \"c63ca4686aa85fea1f77d5450a30bc48df9b06ca44b6f15728ce4efb2adc1fc1\": container with ID starting with c63ca4686aa85fea1f77d5450a30bc48df9b06ca44b6f15728ce4efb2adc1fc1 not found: ID does not exist" Nov 26 09:08:32 crc kubenswrapper[4909]: I1126 09:08:32.516635 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00fbda8-925f-4368-8879-f2ddbfc19375" path="/var/lib/kubelet/pods/c00fbda8-925f-4368-8879-f2ddbfc19375/volumes" Nov 26 09:08:37 crc kubenswrapper[4909]: I1126 09:08:37.301456 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:08:37 crc kubenswrapper[4909]: I1126 09:08:37.301966 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:09:07 crc kubenswrapper[4909]: I1126 09:09:07.301415 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:09:07 crc kubenswrapper[4909]: I1126 09:09:07.301962 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:09:37 crc kubenswrapper[4909]: I1126 09:09:37.302109 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:09:37 crc kubenswrapper[4909]: I1126 09:09:37.302790 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:09:37 crc kubenswrapper[4909]: I1126 09:09:37.302856 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 09:09:37 crc kubenswrapper[4909]: I1126 09:09:37.304108 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 09:09:37 crc kubenswrapper[4909]: I1126 09:09:37.304215 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" gracePeriod=600 Nov 26 09:09:37 crc kubenswrapper[4909]: E1126 09:09:37.455866 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:09:38 crc kubenswrapper[4909]: I1126 09:09:38.143475 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" exitCode=0 Nov 26 09:09:38 crc kubenswrapper[4909]: I1126 09:09:38.143558 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4"} Nov 26 09:09:38 crc kubenswrapper[4909]: I1126 09:09:38.143658 4909 scope.go:117] "RemoveContainer" containerID="9485f7f07079fe5373a349cfe3c947df8d42987ac57bf67685a11bc4c748e707" Nov 26 09:09:38 crc kubenswrapper[4909]: I1126 09:09:38.144941 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:09:38 crc kubenswrapper[4909]: E1126 09:09:38.145627 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:09:50 crc kubenswrapper[4909]: I1126 09:09:50.499723 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:09:50 crc kubenswrapper[4909]: E1126 09:09:50.501244 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:09:55 crc 
kubenswrapper[4909]: I1126 09:09:55.089662 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vnjjt"] Nov 26 09:09:55 crc kubenswrapper[4909]: E1126 09:09:55.091423 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00fbda8-925f-4368-8879-f2ddbfc19375" containerName="registry-server" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.091457 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00fbda8-925f-4368-8879-f2ddbfc19375" containerName="registry-server" Nov 26 09:09:55 crc kubenswrapper[4909]: E1126 09:09:55.091500 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00fbda8-925f-4368-8879-f2ddbfc19375" containerName="extract-utilities" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.091516 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00fbda8-925f-4368-8879-f2ddbfc19375" containerName="extract-utilities" Nov 26 09:09:55 crc kubenswrapper[4909]: E1126 09:09:55.091656 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00fbda8-925f-4368-8879-f2ddbfc19375" containerName="extract-content" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.091675 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00fbda8-925f-4368-8879-f2ddbfc19375" containerName="extract-content" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.092194 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c00fbda8-925f-4368-8879-f2ddbfc19375" containerName="registry-server" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.095767 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.102548 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vnjjt"] Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.184076 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-utilities\") pod \"community-operators-vnjjt\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.184158 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv4jl\" (UniqueName: \"kubernetes.io/projected/5c3a6893-aea8-40c0-862e-f02f038e8bf3-kube-api-access-fv4jl\") pod \"community-operators-vnjjt\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.184309 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-catalog-content\") pod \"community-operators-vnjjt\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.285550 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-catalog-content\") pod \"community-operators-vnjjt\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " 
pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.285683 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-utilities\") pod \"community-operators-vnjjt\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.285735 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv4jl\" (UniqueName: \"kubernetes.io/projected/5c3a6893-aea8-40c0-862e-f02f038e8bf3-kube-api-access-fv4jl\") pod \"community-operators-vnjjt\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.286069 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-catalog-content\") pod \"community-operators-vnjjt\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.286283 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-utilities\") pod \"community-operators-vnjjt\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.310212 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv4jl\" (UniqueName: \"kubernetes.io/projected/5c3a6893-aea8-40c0-862e-f02f038e8bf3-kube-api-access-fv4jl\") pod \"community-operators-vnjjt\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:09:55 crc kubenswrapper[4909]: I1126 09:09:55.425799 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:09:56 crc kubenswrapper[4909]: I1126 09:09:56.020043 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vnjjt"] Nov 26 09:09:56 crc kubenswrapper[4909]: I1126 09:09:56.340331 4909 generic.go:334] "Generic (PLEG): container finished" podID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" containerID="43c88d5a09c77dbaf6ceb5f467d9e9fb9f110bceaf1720002da6b10d121ce205" exitCode=0 Nov 26 09:09:56 crc kubenswrapper[4909]: I1126 09:09:56.340378 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnjjt" event={"ID":"5c3a6893-aea8-40c0-862e-f02f038e8bf3","Type":"ContainerDied","Data":"43c88d5a09c77dbaf6ceb5f467d9e9fb9f110bceaf1720002da6b10d121ce205"} Nov 26 09:09:56 crc kubenswrapper[4909]: I1126 09:09:56.340408 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnjjt" event={"ID":"5c3a6893-aea8-40c0-862e-f02f038e8bf3","Type":"ContainerStarted","Data":"13d1f665cdf0be54d85fc61e488943cba38f0a6299b41ac3fa65c0bece53f0f2"} Nov 26 09:09:58 crc kubenswrapper[4909]: I1126 09:09:58.359356 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnjjt" event={"ID":"5c3a6893-aea8-40c0-862e-f02f038e8bf3","Type":"ContainerStarted","Data":"c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e"} Nov 26 09:09:59 crc kubenswrapper[4909]: I1126 09:09:59.369459 4909 generic.go:334] "Generic (PLEG): container finished" podID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" containerID="c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e" exitCode=0 Nov 26 09:09:59 crc kubenswrapper[4909]: I1126 09:09:59.369782 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnjjt" event={"ID":"5c3a6893-aea8-40c0-862e-f02f038e8bf3","Type":"ContainerDied","Data":"c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e"} Nov 26 09:10:00 crc kubenswrapper[4909]: I1126 09:10:00.384646 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnjjt" event={"ID":"5c3a6893-aea8-40c0-862e-f02f038e8bf3","Type":"ContainerStarted","Data":"f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3"} Nov 26 09:10:00 crc kubenswrapper[4909]: I1126 09:10:00.415760 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vnjjt" podStartSLOduration=1.888683764 podStartE2EDuration="5.415743476s" podCreationTimestamp="2025-11-26 09:09:55 +0000 UTC" firstStartedPulling="2025-11-26 09:09:56.342861877 +0000 UTC m=+7768.489073043" lastFinishedPulling="2025-11-26 09:09:59.869921589 +0000 UTC m=+7772.016132755" observedRunningTime="2025-11-26 09:10:00.413804073 +0000 UTC m=+7772.560015239" watchObservedRunningTime="2025-11-26 09:10:00.415743476 +0000 UTC m=+7772.561954642" Nov 26 09:10:03 crc kubenswrapper[4909]: I1126 09:10:03.498952 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:10:03 crc kubenswrapper[4909]: E1126 09:10:03.499905 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:10:05 crc kubenswrapper[4909]: I1126 09:10:05.426688 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:10:05 crc kubenswrapper[4909]: I1126 09:10:05.426902 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:10:05 crc kubenswrapper[4909]: I1126 09:10:05.480510 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:10:05 crc kubenswrapper[4909]: I1126 09:10:05.536251 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:10:05 crc kubenswrapper[4909]: I1126 09:10:05.718516 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vnjjt"] Nov 26 09:10:07 crc kubenswrapper[4909]: I1126 09:10:07.458886 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vnjjt" podUID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" containerName="registry-server" containerID="cri-o://f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3" gracePeriod=2 Nov 26 09:10:07 crc kubenswrapper[4909]: I1126 09:10:07.951498 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.068322 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv4jl\" (UniqueName: \"kubernetes.io/projected/5c3a6893-aea8-40c0-862e-f02f038e8bf3-kube-api-access-fv4jl\") pod \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.068431 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-utilities\") pod \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.068655 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-catalog-content\") pod \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\" (UID: \"5c3a6893-aea8-40c0-862e-f02f038e8bf3\") " Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.069667 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-utilities" (OuterVolumeSpecName: "utilities") pod "5c3a6893-aea8-40c0-862e-f02f038e8bf3" (UID: "5c3a6893-aea8-40c0-862e-f02f038e8bf3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.074862 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c3a6893-aea8-40c0-862e-f02f038e8bf3-kube-api-access-fv4jl" (OuterVolumeSpecName: "kube-api-access-fv4jl") pod "5c3a6893-aea8-40c0-862e-f02f038e8bf3" (UID: "5c3a6893-aea8-40c0-862e-f02f038e8bf3"). InnerVolumeSpecName "kube-api-access-fv4jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.132505 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c3a6893-aea8-40c0-862e-f02f038e8bf3" (UID: "5c3a6893-aea8-40c0-862e-f02f038e8bf3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.171393 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.171423 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv4jl\" (UniqueName: \"kubernetes.io/projected/5c3a6893-aea8-40c0-862e-f02f038e8bf3-kube-api-access-fv4jl\") on node \"crc\" DevicePath \"\"" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.171434 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c3a6893-aea8-40c0-862e-f02f038e8bf3-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.471172 4909 generic.go:334] "Generic (PLEG): container finished" podID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" containerID="f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3" exitCode=0 Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.471214 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnjjt" event={"ID":"5c3a6893-aea8-40c0-862e-f02f038e8bf3","Type":"ContainerDied","Data":"f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3"} Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.471248 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnjjt" event={"ID":"5c3a6893-aea8-40c0-862e-f02f038e8bf3","Type":"ContainerDied","Data":"13d1f665cdf0be54d85fc61e488943cba38f0a6299b41ac3fa65c0bece53f0f2"} Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.471266 4909 scope.go:117] "RemoveContainer" containerID="f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.471270 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vnjjt" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.499081 4909 scope.go:117] "RemoveContainer" containerID="c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.535660 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vnjjt"] Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.542160 4909 scope.go:117] "RemoveContainer" containerID="43c88d5a09c77dbaf6ceb5f467d9e9fb9f110bceaf1720002da6b10d121ce205" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.543896 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vnjjt"] Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.585133 4909 scope.go:117] "RemoveContainer" containerID="f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3" Nov 26 09:10:08 crc kubenswrapper[4909]: E1126 09:10:08.585452 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3\": container with ID starting with f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3 not found: ID does not exist" containerID="f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.585488 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3"} err="failed to get container status \"f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3\": rpc error: code = NotFound desc = could not find container \"f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3\": container with ID starting with f0a192f63f951547a6132a5460c55edc410e051f6b759a83f6c96b2e3bc564e3 not found: ID does not exist" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.585511 4909 scope.go:117] "RemoveContainer" containerID="c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e" Nov 26 09:10:08 crc kubenswrapper[4909]: E1126 09:10:08.586099 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e\": container with ID starting with c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e not found: ID does not exist" containerID="c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.586132 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e"} err="failed to get container status \"c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e\": rpc error: code = NotFound desc = could not find container \"c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e\": container with ID starting with c0252801ff5daad2c1d4b28d1daaa9078ceaf660f82a3717e231d4b63336711e not found: ID does not exist" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.586156 4909 scope.go:117] "RemoveContainer" containerID="43c88d5a09c77dbaf6ceb5f467d9e9fb9f110bceaf1720002da6b10d121ce205" Nov 26 09:10:08 crc kubenswrapper[4909]: E1126 09:10:08.586707 4909 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"43c88d5a09c77dbaf6ceb5f467d9e9fb9f110bceaf1720002da6b10d121ce205\": container with ID starting with 43c88d5a09c77dbaf6ceb5f467d9e9fb9f110bceaf1720002da6b10d121ce205 not found: ID does not exist" containerID="43c88d5a09c77dbaf6ceb5f467d9e9fb9f110bceaf1720002da6b10d121ce205" Nov 26 09:10:08 crc kubenswrapper[4909]: I1126 09:10:08.586736 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c88d5a09c77dbaf6ceb5f467d9e9fb9f110bceaf1720002da6b10d121ce205"} err="failed to get container status \"43c88d5a09c77dbaf6ceb5f467d9e9fb9f110bceaf1720002da6b10d121ce205\": rpc error: code = NotFound desc = could not find container \"43c88d5a09c77dbaf6ceb5f467d9e9fb9f110bceaf1720002da6b10d121ce205\": container with ID starting with 43c88d5a09c77dbaf6ceb5f467d9e9fb9f110bceaf1720002da6b10d121ce205 not found: ID does not exist" Nov 26 09:10:10 crc kubenswrapper[4909]: I1126 09:10:10.517518 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" path="/var/lib/kubelet/pods/5c3a6893-aea8-40c0-862e-f02f038e8bf3/volumes" Nov 26 09:10:14 crc kubenswrapper[4909]: I1126 09:10:14.498563 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:10:14 crc kubenswrapper[4909]: E1126 09:10:14.499432 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:10:29 crc kubenswrapper[4909]: I1126 09:10:29.498965 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:10:29 crc kubenswrapper[4909]: E1126 09:10:29.500716 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:10:42 crc kubenswrapper[4909]: I1126 09:10:42.499053 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:10:42 crc kubenswrapper[4909]: E1126 09:10:42.499851 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:10:55 crc kubenswrapper[4909]: I1126 09:10:55.498819 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:10:55 crc kubenswrapper[4909]: E1126 09:10:55.499468 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:11:09 crc kubenswrapper[4909]: I1126 09:11:09.498613 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:11:09 crc kubenswrapper[4909]: E1126 09:11:09.499487 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:11:23 crc kubenswrapper[4909]: I1126 09:11:23.499809 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:11:23 crc kubenswrapper[4909]: E1126 09:11:23.500879 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:11:31 crc kubenswrapper[4909]: I1126 09:11:31.432311 4909 generic.go:334] "Generic (PLEG): container finished" podID="c12a232c-8572-40da-bd58-1f46eab0d5b4" containerID="cf66d26f7c71e22ebb2eed14c084bc3ff0bac16bcece82adc6763b1edcd6e440" exitCode=0 Nov 26 09:11:31 crc kubenswrapper[4909]: I1126 09:11:31.432527 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" event={"ID":"c12a232c-8572-40da-bd58-1f46eab0d5b4","Type":"ContainerDied","Data":"cf66d26f7c71e22ebb2eed14c084bc3ff0bac16bcece82adc6763b1edcd6e440"} Nov 26 09:11:32 crc kubenswrapper[4909]: I1126 09:11:32.959870 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.117500 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-combined-ca-bundle\") pod \"c12a232c-8572-40da-bd58-1f46eab0d5b4\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.117685 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svngj\" (UniqueName: \"kubernetes.io/projected/c12a232c-8572-40da-bd58-1f46eab0d5b4-kube-api-access-svngj\") pod \"c12a232c-8572-40da-bd58-1f46eab0d5b4\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.117745 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-inventory\") pod \"c12a232c-8572-40da-bd58-1f46eab0d5b4\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.118703 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ssh-key\") pod \"c12a232c-8572-40da-bd58-1f46eab0d5b4\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.118756 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ceph\") pod \"c12a232c-8572-40da-bd58-1f46eab0d5b4\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.118829 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-secret-0\") pod \"c12a232c-8572-40da-bd58-1f46eab0d5b4\" (UID: \"c12a232c-8572-40da-bd58-1f46eab0d5b4\") " Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.123856 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12a232c-8572-40da-bd58-1f46eab0d5b4-kube-api-access-svngj" (OuterVolumeSpecName: "kube-api-access-svngj") pod "c12a232c-8572-40da-bd58-1f46eab0d5b4" (UID: "c12a232c-8572-40da-bd58-1f46eab0d5b4"). InnerVolumeSpecName "kube-api-access-svngj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.123947 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c12a232c-8572-40da-bd58-1f46eab0d5b4" (UID: "c12a232c-8572-40da-bd58-1f46eab0d5b4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.125766 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ceph" (OuterVolumeSpecName: "ceph") pod "c12a232c-8572-40da-bd58-1f46eab0d5b4" (UID: "c12a232c-8572-40da-bd58-1f46eab0d5b4"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.160180 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-inventory" (OuterVolumeSpecName: "inventory") pod "c12a232c-8572-40da-bd58-1f46eab0d5b4" (UID: "c12a232c-8572-40da-bd58-1f46eab0d5b4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.163930 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c12a232c-8572-40da-bd58-1f46eab0d5b4" (UID: "c12a232c-8572-40da-bd58-1f46eab0d5b4"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.166758 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c12a232c-8572-40da-bd58-1f46eab0d5b4" (UID: "c12a232c-8572-40da-bd58-1f46eab0d5b4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.221579 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.221639 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.221659 4909 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.221676 4909 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.221693 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svngj\" (UniqueName: \"kubernetes.io/projected/c12a232c-8572-40da-bd58-1f46eab0d5b4-kube-api-access-svngj\") on node \"crc\" DevicePath \"\"" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.221709 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c12a232c-8572-40da-bd58-1f46eab0d5b4-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.460048 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" event={"ID":"c12a232c-8572-40da-bd58-1f46eab0d5b4","Type":"ContainerDied","Data":"a412bd0b36bdc1b0d14a1ff8a119fd2f18bc1828a04e423bf52bc3745d61e79d"} Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.460355 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a412bd0b36bdc1b0d14a1ff8a119fd2f18bc1828a04e423bf52bc3745d61e79d" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.460147 4909 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-7zc8t" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.598834 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-dmw74"] Nov 26 09:11:33 crc kubenswrapper[4909]: E1126 09:11:33.599272 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" containerName="registry-server" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.599289 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" containerName="registry-server" Nov 26 09:11:33 crc kubenswrapper[4909]: E1126 09:11:33.599314 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12a232c-8572-40da-bd58-1f46eab0d5b4" containerName="libvirt-openstack-openstack-cell1" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.599320 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12a232c-8572-40da-bd58-1f46eab0d5b4" containerName="libvirt-openstack-openstack-cell1" Nov 26 09:11:33 crc kubenswrapper[4909]: E1126 09:11:33.599340 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" containerName="extract-content" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.599346 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" containerName="extract-content" Nov 26 09:11:33 crc kubenswrapper[4909]: E1126 09:11:33.599360 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" containerName="extract-utilities" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.599366 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" containerName="extract-utilities" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.599565 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12a232c-8572-40da-bd58-1f46eab0d5b4" containerName="libvirt-openstack-openstack-cell1" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.599583 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c3a6893-aea8-40c0-862e-f02f038e8bf3" containerName="registry-server" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.600350 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.603705 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.603834 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.605193 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.605332 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.605389 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.605539 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.607913 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.617297 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-dmw74"] Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.633742 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.633848 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.633893 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.634060 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.634112 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.634153 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.634172 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.634220 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ceph\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.634272 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.634432 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zllrt\" (UniqueName: \"kubernetes.io/projected/e523aac5-088b-427f-890e-90ad45a407f6-kube-api-access-zllrt\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.634490 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-inventory\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.736879 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.736933 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: 
\"kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.736954 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.736985 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ceph\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.737020 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.737096 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zllrt\" (UniqueName: \"kubernetes.io/projected/e523aac5-088b-427f-890e-90ad45a407f6-kube-api-access-zllrt\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.737132 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-inventory\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.737166 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.737207 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.737233 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-0\") pod 
\"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.737270 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.737910 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.738030 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.742444 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ceph\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.742875 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-inventory\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.743396 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.743731 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.744163 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 
09:11:33.747292 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.747341 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.747398 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.758353 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zllrt\" (UniqueName: \"kubernetes.io/projected/e523aac5-088b-427f-890e-90ad45a407f6-kube-api-access-zllrt\") pod \"nova-cell1-openstack-openstack-cell1-dmw74\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:33 crc kubenswrapper[4909]: I1126 09:11:33.918908 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" Nov 26 09:11:34 crc kubenswrapper[4909]: I1126 09:11:34.524730 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-dmw74"] Nov 26 09:11:35 crc kubenswrapper[4909]: I1126 09:11:35.490110 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" event={"ID":"e523aac5-088b-427f-890e-90ad45a407f6","Type":"ContainerStarted","Data":"94e6a8513534af684d49f5f07ec360227e37c36f3c8ebff275e2c0df6cf6957b"} Nov 26 09:11:35 crc kubenswrapper[4909]: I1126 09:11:35.490645 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" event={"ID":"e523aac5-088b-427f-890e-90ad45a407f6","Type":"ContainerStarted","Data":"8b190bc1a143706a7fecb55f62f4256a4979173ef7fc36cf603499bdb1b2e940"} Nov 26 09:11:35 crc kubenswrapper[4909]: I1126 09:11:35.516460 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" podStartSLOduration=2.062517995 podStartE2EDuration="2.516440376s" podCreationTimestamp="2025-11-26 09:11:33 +0000 UTC" firstStartedPulling="2025-11-26 09:11:34.542492732 +0000 UTC m=+7866.688703908" lastFinishedPulling="2025-11-26 09:11:34.996415113 +0000 UTC m=+7867.142626289" observedRunningTime="2025-11-26 09:11:35.507163844 +0000 UTC m=+7867.653375030" watchObservedRunningTime="2025-11-26 09:11:35.516440376 +0000 UTC m=+7867.662651552" Nov 26 09:11:37 crc kubenswrapper[4909]: I1126 09:11:37.499311 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:11:37 crc kubenswrapper[4909]: E1126 09:11:37.500102 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.491399 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cjwvv"] Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.495480 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.517505 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjwvv"] Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.620823 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lfmh\" (UniqueName: \"kubernetes.io/projected/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-kube-api-access-8lfmh\") pod \"certified-operators-cjwvv\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.621070 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-utilities\") pod \"certified-operators-cjwvv\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.621237 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-catalog-content\") pod \"certified-operators-cjwvv\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.724112 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lfmh\" (UniqueName: \"kubernetes.io/projected/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-kube-api-access-8lfmh\") pod \"certified-operators-cjwvv\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.724162 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-utilities\") pod \"certified-operators-cjwvv\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.724238 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-catalog-content\") pod \"certified-operators-cjwvv\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.724717 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-catalog-content\") pod \"certified-operators-cjwvv\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.724865 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-utilities\") pod \"certified-operators-cjwvv\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.748874 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8lfmh\" (UniqueName: \"kubernetes.io/projected/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-kube-api-access-8lfmh\") pod \"certified-operators-cjwvv\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:46 crc kubenswrapper[4909]: I1126 09:11:46.835107 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:47 crc kubenswrapper[4909]: I1126 09:11:47.342584 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjwvv"] Nov 26 09:11:47 crc kubenswrapper[4909]: I1126 09:11:47.627003 4909 generic.go:334] "Generic (PLEG): container finished" podID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" containerID="bca11beadf5206b4b6a623242ab4bed8b9d9199b26a350a7bed081b95361ce52" exitCode=0 Nov 26 09:11:47 crc kubenswrapper[4909]: I1126 09:11:47.627134 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjwvv" event={"ID":"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15","Type":"ContainerDied","Data":"bca11beadf5206b4b6a623242ab4bed8b9d9199b26a350a7bed081b95361ce52"} Nov 26 09:11:47 crc kubenswrapper[4909]: I1126 09:11:47.627208 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjwvv" event={"ID":"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15","Type":"ContainerStarted","Data":"3a246dcce15add0e10e0ffa3ae6fe24a6fb353974619b9dfc4efa8e343ac7719"} Nov 26 09:11:47 crc kubenswrapper[4909]: I1126 09:11:47.628924 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 09:11:48 crc kubenswrapper[4909]: I1126 09:11:48.640224 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjwvv" event={"ID":"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15","Type":"ContainerStarted","Data":"52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800"} Nov 26 09:11:49 crc kubenswrapper[4909]: I1126 09:11:49.507552 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:11:49 crc kubenswrapper[4909]: E1126 09:11:49.508193 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:11:50 crc kubenswrapper[4909]: I1126 09:11:50.658300 4909 generic.go:334] "Generic (PLEG): container finished" podID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" containerID="52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800" exitCode=0 Nov 26 09:11:50 crc kubenswrapper[4909]: I1126 09:11:50.658372 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjwvv" event={"ID":"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15","Type":"ContainerDied","Data":"52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800"} Nov 26 09:11:52 crc kubenswrapper[4909]: I1126 09:11:52.689957 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjwvv" 
event={"ID":"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15","Type":"ContainerStarted","Data":"1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb"} Nov 26 09:11:52 crc kubenswrapper[4909]: I1126 09:11:52.720376 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cjwvv" podStartSLOduration=2.850315183 podStartE2EDuration="6.720358829s" podCreationTimestamp="2025-11-26 09:11:46 +0000 UTC" firstStartedPulling="2025-11-26 09:11:47.628708443 +0000 UTC m=+7879.774919609" lastFinishedPulling="2025-11-26 09:11:51.498752069 +0000 UTC m=+7883.644963255" observedRunningTime="2025-11-26 09:11:52.719272469 +0000 UTC m=+7884.865483635" watchObservedRunningTime="2025-11-26 09:11:52.720358829 +0000 UTC m=+7884.866570015" Nov 26 09:11:56 crc kubenswrapper[4909]: I1126 09:11:56.836317 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:56 crc kubenswrapper[4909]: I1126 09:11:56.836948 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:56 crc kubenswrapper[4909]: I1126 09:11:56.889092 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:57 crc kubenswrapper[4909]: I1126 09:11:57.819197 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:11:57 crc kubenswrapper[4909]: I1126 09:11:57.893006 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjwvv"] Nov 26 09:11:59 crc kubenswrapper[4909]: I1126 09:11:59.773062 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cjwvv" podUID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" containerName="registry-server" containerID="cri-o://1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb" gracePeriod=2 Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.318058 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.424977 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-catalog-content\") pod \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.425097 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-utilities\") pod \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.425358 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lfmh\" (UniqueName: \"kubernetes.io/projected/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-kube-api-access-8lfmh\") pod \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\" (UID: \"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15\") " Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.426069 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-utilities" (OuterVolumeSpecName: "utilities") pod "33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" (UID: "33aebeaf-3483-4a14-8c5e-d75b1b4e5b15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.430817 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-kube-api-access-8lfmh" (OuterVolumeSpecName: "kube-api-access-8lfmh") pod "33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" (UID: "33aebeaf-3483-4a14-8c5e-d75b1b4e5b15"). InnerVolumeSpecName "kube-api-access-8lfmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.491797 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" (UID: "33aebeaf-3483-4a14-8c5e-d75b1b4e5b15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.527791 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lfmh\" (UniqueName: \"kubernetes.io/projected/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-kube-api-access-8lfmh\") on node \"crc\" DevicePath \"\"" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.527831 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.527843 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.795938 4909 generic.go:334] "Generic (PLEG): container finished" podID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" containerID="1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb" exitCode=0 Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.796012 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjwvv" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.795988 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjwvv" event={"ID":"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15","Type":"ContainerDied","Data":"1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb"} Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.796091 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjwvv" event={"ID":"33aebeaf-3483-4a14-8c5e-d75b1b4e5b15","Type":"ContainerDied","Data":"3a246dcce15add0e10e0ffa3ae6fe24a6fb353974619b9dfc4efa8e343ac7719"} Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.796148 4909 scope.go:117] "RemoveContainer" containerID="1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.833727 4909 scope.go:117] "RemoveContainer" containerID="52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.835306 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjwvv"] Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.846512 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cjwvv"] Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.861980 4909 scope.go:117] "RemoveContainer" containerID="bca11beadf5206b4b6a623242ab4bed8b9d9199b26a350a7bed081b95361ce52" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.915119 4909 scope.go:117] "RemoveContainer" containerID="1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb" Nov 26 09:12:00 crc kubenswrapper[4909]: E1126 09:12:00.915661 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb\": container with ID starting with 1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb not found: ID does not exist" containerID="1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.915713 
4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb"} err="failed to get container status \"1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb\": rpc error: code = NotFound desc = could not find container \"1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb\": container with ID starting with 1ea7e7f134fdfaf60dc7fc883d0ea9b3a7c6921b0a946102aead4576b9172efb not found: ID does not exist" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.915749 4909 scope.go:117] "RemoveContainer" containerID="52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800" Nov 26 09:12:00 crc kubenswrapper[4909]: E1126 09:12:00.916322 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800\": container with ID starting with 52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800 not found: ID does not exist" containerID="52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.916365 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800"} err="failed to get container status \"52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800\": rpc error: code = NotFound desc = could not find container \"52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800\": container with ID starting with 52ea6eda8491045199f42db5790f2181132428e0a20fd8da35da1852925d1800 not found: ID does not exist" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.916382 4909 scope.go:117] "RemoveContainer" containerID="bca11beadf5206b4b6a623242ab4bed8b9d9199b26a350a7bed081b95361ce52" Nov 26 09:12:00 crc kubenswrapper[4909]: E1126 09:12:00.916766 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca11beadf5206b4b6a623242ab4bed8b9d9199b26a350a7bed081b95361ce52\": container with ID starting with bca11beadf5206b4b6a623242ab4bed8b9d9199b26a350a7bed081b95361ce52 not found: ID does not exist" containerID="bca11beadf5206b4b6a623242ab4bed8b9d9199b26a350a7bed081b95361ce52" Nov 26 09:12:00 crc kubenswrapper[4909]: I1126 09:12:00.916799 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca11beadf5206b4b6a623242ab4bed8b9d9199b26a350a7bed081b95361ce52"} err="failed to get container status \"bca11beadf5206b4b6a623242ab4bed8b9d9199b26a350a7bed081b95361ce52\": rpc error: code = NotFound desc = could not find container \"bca11beadf5206b4b6a623242ab4bed8b9d9199b26a350a7bed081b95361ce52\": container with ID starting with bca11beadf5206b4b6a623242ab4bed8b9d9199b26a350a7bed081b95361ce52 not found: ID does not exist" Nov 26 09:12:02 crc kubenswrapper[4909]: I1126 09:12:02.513746 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" path="/var/lib/kubelet/pods/33aebeaf-3483-4a14-8c5e-d75b1b4e5b15/volumes" Nov 26 09:12:03 crc kubenswrapper[4909]: I1126 09:12:03.500307 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:12:03 crc kubenswrapper[4909]: E1126 09:12:03.500657 4909 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:12:15 crc kubenswrapper[4909]: I1126 09:12:15.498759 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:12:15 crc kubenswrapper[4909]: E1126 09:12:15.499575 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:12:27 crc kubenswrapper[4909]: I1126 09:12:27.500102 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:12:27 crc kubenswrapper[4909]: E1126 09:12:27.501051 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:12:39 crc kubenswrapper[4909]: I1126 09:12:39.499554 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:12:39 crc kubenswrapper[4909]: E1126 09:12:39.500191 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:12:51 crc kubenswrapper[4909]: I1126 09:12:51.499876 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:12:51 crc kubenswrapper[4909]: E1126 09:12:51.501298 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:13:02 crc kubenswrapper[4909]: I1126 09:13:02.498935 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:13:02 crc kubenswrapper[4909]: E1126 09:13:02.499802 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:13:13 crc kubenswrapper[4909]: I1126 09:13:13.499754 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:13:13 crc kubenswrapper[4909]: E1126 09:13:13.501434 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:13:25 crc kubenswrapper[4909]: I1126 09:13:25.498294 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:13:25 crc kubenswrapper[4909]: E1126 09:13:25.498966 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.487737 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rjvzt"] Nov 26 09:13:30 crc kubenswrapper[4909]: E1126 09:13:30.501976 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" containerName="extract-utilities" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.502009 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" containerName="extract-utilities" Nov 26 09:13:30 crc kubenswrapper[4909]: E1126 09:13:30.502043 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" containerName="registry-server" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.502051 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" containerName="registry-server" Nov 26 09:13:30 crc kubenswrapper[4909]: E1126 09:13:30.502096 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" containerName="extract-content" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.502104 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" containerName="extract-content" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.502483 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="33aebeaf-3483-4a14-8c5e-d75b1b4e5b15" containerName="registry-server" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.506677 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.521828 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjvzt"] Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.573624 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4x75\" (UniqueName: \"kubernetes.io/projected/bb55876d-4988-4104-aba1-28fcbb775359-kube-api-access-p4x75\") pod \"redhat-operators-rjvzt\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.573721 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-utilities\") pod \"redhat-operators-rjvzt\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.573749 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-catalog-content\") pod \"redhat-operators-rjvzt\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.675612 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-utilities\") pod \"redhat-operators-rjvzt\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.675657 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-catalog-content\") pod \"redhat-operators-rjvzt\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.675852 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4x75\" (UniqueName: \"kubernetes.io/projected/bb55876d-4988-4104-aba1-28fcbb775359-kube-api-access-p4x75\") pod \"redhat-operators-rjvzt\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.676187 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-utilities\") pod \"redhat-operators-rjvzt\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.676277 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-catalog-content\") pod \"redhat-operators-rjvzt\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.704664 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p4x75\" (UniqueName: \"kubernetes.io/projected/bb55876d-4988-4104-aba1-28fcbb775359-kube-api-access-p4x75\") pod \"redhat-operators-rjvzt\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:30 crc kubenswrapper[4909]: I1126 09:13:30.884994 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:31 crc kubenswrapper[4909]: I1126 09:13:31.416016 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjvzt"] Nov 26 09:13:31 crc kubenswrapper[4909]: I1126 09:13:31.792578 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjvzt" event={"ID":"bb55876d-4988-4104-aba1-28fcbb775359","Type":"ContainerStarted","Data":"d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c"} Nov 26 09:13:31 crc kubenswrapper[4909]: I1126 09:13:31.792641 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjvzt" event={"ID":"bb55876d-4988-4104-aba1-28fcbb775359","Type":"ContainerStarted","Data":"81899b200fb69fd51e2a84b465c005d626c1917566fe86d9548262b73b1b99e3"} Nov 26 09:13:32 crc kubenswrapper[4909]: I1126 09:13:32.811974 4909 generic.go:334] "Generic (PLEG): container finished" podID="bb55876d-4988-4104-aba1-28fcbb775359" containerID="d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c" exitCode=0 Nov 26 09:13:32 crc kubenswrapper[4909]: I1126 09:13:32.812150 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjvzt" event={"ID":"bb55876d-4988-4104-aba1-28fcbb775359","Type":"ContainerDied","Data":"d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c"} Nov 26 09:13:34 crc kubenswrapper[4909]: I1126 09:13:34.833627 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjvzt" event={"ID":"bb55876d-4988-4104-aba1-28fcbb775359","Type":"ContainerStarted","Data":"862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6"} Nov 26 09:13:39 crc kubenswrapper[4909]: I1126 09:13:39.500094 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:13:39 crc kubenswrapper[4909]: E1126 09:13:39.501909 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:13:42 crc kubenswrapper[4909]: I1126 09:13:42.969499 4909 generic.go:334] "Generic (PLEG): container finished" podID="bb55876d-4988-4104-aba1-28fcbb775359" containerID="862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6" exitCode=0 Nov 26 09:13:42 crc kubenswrapper[4909]: I1126 09:13:42.969584 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjvzt" event={"ID":"bb55876d-4988-4104-aba1-28fcbb775359","Type":"ContainerDied","Data":"862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6"} Nov 26 09:13:43 crc kubenswrapper[4909]: I1126 09:13:43.984787 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-rjvzt" event={"ID":"bb55876d-4988-4104-aba1-28fcbb775359","Type":"ContainerStarted","Data":"af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1"} Nov 26 09:13:44 crc kubenswrapper[4909]: I1126 09:13:44.008960 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rjvzt" podStartSLOduration=3.29551996 podStartE2EDuration="14.008944382s" podCreationTimestamp="2025-11-26 09:13:30 +0000 UTC" firstStartedPulling="2025-11-26 09:13:32.814478759 +0000 UTC m=+7984.960689925" lastFinishedPulling="2025-11-26 09:13:43.527903181 +0000 UTC m=+7995.674114347" observedRunningTime="2025-11-26 09:13:44.002383003 +0000 UTC m=+7996.148594159" watchObservedRunningTime="2025-11-26 09:13:44.008944382 +0000 UTC m=+7996.155155548" Nov 26 09:13:50 crc kubenswrapper[4909]: I1126 09:13:50.885339 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:50 crc kubenswrapper[4909]: I1126 09:13:50.887000 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:50 crc kubenswrapper[4909]: I1126 09:13:50.947460 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:51 crc kubenswrapper[4909]: I1126 09:13:51.105192 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:51 crc kubenswrapper[4909]: I1126 09:13:51.192067 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjvzt"] Nov 26 09:13:53 crc kubenswrapper[4909]: I1126 09:13:53.071238 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rjvzt" podUID="bb55876d-4988-4104-aba1-28fcbb775359" containerName="registry-server" containerID="cri-o://af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1" gracePeriod=2 Nov 26 09:13:53 crc kubenswrapper[4909]: I1126 09:13:53.588343 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:53 crc kubenswrapper[4909]: I1126 09:13:53.723583 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-catalog-content\") pod \"bb55876d-4988-4104-aba1-28fcbb775359\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " Nov 26 09:13:53 crc kubenswrapper[4909]: I1126 09:13:53.723658 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4x75\" (UniqueName: \"kubernetes.io/projected/bb55876d-4988-4104-aba1-28fcbb775359-kube-api-access-p4x75\") pod \"bb55876d-4988-4104-aba1-28fcbb775359\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " Nov 26 09:13:53 crc kubenswrapper[4909]: I1126 09:13:53.723832 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-utilities\") pod \"bb55876d-4988-4104-aba1-28fcbb775359\" (UID: \"bb55876d-4988-4104-aba1-28fcbb775359\") " Nov 26 09:13:53 crc kubenswrapper[4909]: I1126 09:13:53.725200 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-utilities" (OuterVolumeSpecName: "utilities") pod "bb55876d-4988-4104-aba1-28fcbb775359" (UID: "bb55876d-4988-4104-aba1-28fcbb775359"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:13:53 crc kubenswrapper[4909]: I1126 09:13:53.733793 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb55876d-4988-4104-aba1-28fcbb775359-kube-api-access-p4x75" (OuterVolumeSpecName: "kube-api-access-p4x75") pod "bb55876d-4988-4104-aba1-28fcbb775359" (UID: "bb55876d-4988-4104-aba1-28fcbb775359"). InnerVolumeSpecName "kube-api-access-p4x75". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:13:53 crc kubenswrapper[4909]: I1126 09:13:53.826215 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:13:53 crc kubenswrapper[4909]: I1126 09:13:53.826251 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4x75\" (UniqueName: \"kubernetes.io/projected/bb55876d-4988-4104-aba1-28fcbb775359-kube-api-access-p4x75\") on node \"crc\" DevicePath \"\"" Nov 26 09:13:53 crc kubenswrapper[4909]: I1126 09:13:53.834988 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb55876d-4988-4104-aba1-28fcbb775359" (UID: "bb55876d-4988-4104-aba1-28fcbb775359"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:13:53 crc kubenswrapper[4909]: I1126 09:13:53.928362 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb55876d-4988-4104-aba1-28fcbb775359-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.085678 4909 generic.go:334] "Generic (PLEG): container finished" podID="bb55876d-4988-4104-aba1-28fcbb775359" containerID="af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1" exitCode=0 Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.085734 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjvzt" event={"ID":"bb55876d-4988-4104-aba1-28fcbb775359","Type":"ContainerDied","Data":"af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1"} Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.085765 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjvzt" Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.085776 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjvzt" event={"ID":"bb55876d-4988-4104-aba1-28fcbb775359","Type":"ContainerDied","Data":"81899b200fb69fd51e2a84b465c005d626c1917566fe86d9548262b73b1b99e3"} Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.085798 4909 scope.go:117] "RemoveContainer" containerID="af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1" Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.116966 4909 scope.go:117] "RemoveContainer" containerID="862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6" Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.138949 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjvzt"] Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.148711 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rjvzt"] Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.165669 4909 scope.go:117] "RemoveContainer" containerID="d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c" Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.212705 4909 scope.go:117] "RemoveContainer" containerID="af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1" Nov 26 09:13:54 crc kubenswrapper[4909]: E1126 09:13:54.213186 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1\": container with ID starting with af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1 not found: ID does not exist" containerID="af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1" Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.213217 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1"} err="failed to get container status \"af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1\": rpc error: code = NotFound desc = could not find container \"af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1\": container with ID starting with af9dff616485ac41d4390568632e1d9c963bc334eb8b691cec64c7e05f034eb1 not found: ID does not exist" Nov 26 09:13:54 crc 
kubenswrapper[4909]: I1126 09:13:54.213237 4909 scope.go:117] "RemoveContainer" containerID="862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6" Nov 26 09:13:54 crc kubenswrapper[4909]: E1126 09:13:54.213514 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6\": container with ID starting with 862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6 not found: ID does not exist" containerID="862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6" Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.213534 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6"} err="failed to get container status \"862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6\": rpc error: code = NotFound desc = could not find container \"862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6\": container with ID starting with 862ed4f61253481099877a4b0a52f20a32f054e3844b209eed4ca58f34b124a6 not found: ID does not exist" Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.213546 4909 scope.go:117] "RemoveContainer" containerID="d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c" Nov 26 09:13:54 crc kubenswrapper[4909]: E1126 09:13:54.213804 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c\": container with ID starting with d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c not found: ID does not exist" containerID="d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c" Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.213824 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c"} err="failed to get container status \"d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c\": rpc error: code = NotFound desc = could not find container \"d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c\": container with ID starting with d87934f6b817ab606d61db34143b3e4c30899ae7cb25347f7b7c56cec8a7cc8c not found: ID does not exist" Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.499693 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:13:54 crc kubenswrapper[4909]: E1126 09:13:54.500186 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:13:54 crc kubenswrapper[4909]: I1126 09:13:54.512689 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb55876d-4988-4104-aba1-28fcbb775359" path="/var/lib/kubelet/pods/bb55876d-4988-4104-aba1-28fcbb775359/volumes" Nov 26 09:14:05 crc kubenswrapper[4909]: I1126 09:14:05.499467 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" 
Nov 26 09:14:05 crc kubenswrapper[4909]: E1126 09:14:05.500314 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:14:18 crc kubenswrapper[4909]: I1126 09:14:18.506570 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:14:18 crc kubenswrapper[4909]: E1126 09:14:18.507360 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:14:30 crc kubenswrapper[4909]: I1126 09:14:30.506467 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:14:30 crc kubenswrapper[4909]: E1126 09:14:30.507554 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:14:43 crc kubenswrapper[4909]: I1126 09:14:43.499924 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4" Nov 26 09:14:44 crc kubenswrapper[4909]: I1126 09:14:44.687777 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"dc39db933dd67d1f3d921b87fa72fde5f9f5072131e6833356ff3d7b9e58c919"} Nov 26 09:14:49 crc kubenswrapper[4909]: I1126 09:14:49.746702 4909 generic.go:334] "Generic (PLEG): container finished" podID="e523aac5-088b-427f-890e-90ad45a407f6" containerID="94e6a8513534af684d49f5f07ec360227e37c36f3c8ebff275e2c0df6cf6957b" exitCode=0 Nov 26 09:14:49 crc kubenswrapper[4909]: I1126 09:14:49.746798 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" event={"ID":"e523aac5-088b-427f-890e-90ad45a407f6","Type":"ContainerDied","Data":"94e6a8513534af684d49f5f07ec360227e37c36f3c8ebff275e2c0df6cf6957b"} Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.244048 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.372337 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ceph\") pod \"e523aac5-088b-427f-890e-90ad45a407f6\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") "
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.372747 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-inventory\") pod \"e523aac5-088b-427f-890e-90ad45a407f6\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") "
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.372790 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ssh-key\") pod \"e523aac5-088b-427f-890e-90ad45a407f6\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") "
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.372833 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-1\") pod \"e523aac5-088b-427f-890e-90ad45a407f6\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") "
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.372886 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-0\") pod \"e523aac5-088b-427f-890e-90ad45a407f6\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") "
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.372927 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-1\") pod \"e523aac5-088b-427f-890e-90ad45a407f6\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") "
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.373008 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zllrt\" (UniqueName: \"kubernetes.io/projected/e523aac5-088b-427f-890e-90ad45a407f6-kube-api-access-zllrt\") pod \"e523aac5-088b-427f-890e-90ad45a407f6\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") "
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.373038 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-1\") pod \"e523aac5-088b-427f-890e-90ad45a407f6\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") "
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.373116 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-0\") pod \"e523aac5-088b-427f-890e-90ad45a407f6\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") "
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.373144 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-0\") pod \"e523aac5-088b-427f-890e-90ad45a407f6\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") "
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.373252 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-combined-ca-bundle\") pod \"e523aac5-088b-427f-890e-90ad45a407f6\" (UID: \"e523aac5-088b-427f-890e-90ad45a407f6\") "
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.377765 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ceph" (OuterVolumeSpecName: "ceph") pod "e523aac5-088b-427f-890e-90ad45a407f6" (UID: "e523aac5-088b-427f-890e-90ad45a407f6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.391893 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "e523aac5-088b-427f-890e-90ad45a407f6" (UID: "e523aac5-088b-427f-890e-90ad45a407f6"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.393289 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e523aac5-088b-427f-890e-90ad45a407f6-kube-api-access-zllrt" (OuterVolumeSpecName: "kube-api-access-zllrt") pod "e523aac5-088b-427f-890e-90ad45a407f6" (UID: "e523aac5-088b-427f-890e-90ad45a407f6"). InnerVolumeSpecName "kube-api-access-zllrt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.401833 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "e523aac5-088b-427f-890e-90ad45a407f6" (UID: "e523aac5-088b-427f-890e-90ad45a407f6"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.405922 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "e523aac5-088b-427f-890e-90ad45a407f6" (UID: "e523aac5-088b-427f-890e-90ad45a407f6"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.408024 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e523aac5-088b-427f-890e-90ad45a407f6" (UID: "e523aac5-088b-427f-890e-90ad45a407f6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.411582 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "e523aac5-088b-427f-890e-90ad45a407f6" (UID: "e523aac5-088b-427f-890e-90ad45a407f6"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.413710 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "e523aac5-088b-427f-890e-90ad45a407f6" (UID: "e523aac5-088b-427f-890e-90ad45a407f6"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.413778 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "e523aac5-088b-427f-890e-90ad45a407f6" (UID: "e523aac5-088b-427f-890e-90ad45a407f6"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.415307 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-inventory" (OuterVolumeSpecName: "inventory") pod "e523aac5-088b-427f-890e-90ad45a407f6" (UID: "e523aac5-088b-427f-890e-90ad45a407f6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.427022 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "e523aac5-088b-427f-890e-90ad45a407f6" (UID: "e523aac5-088b-427f-890e-90ad45a407f6"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.476220 4909 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.476250 4909 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.476260 4909 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.476272 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ceph\") on node \"crc\" DevicePath \"\""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.476281 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-inventory\") on node \"crc\" DevicePath \"\""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.476292 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.476301 4909 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/e523aac5-088b-427f-890e-90ad45a407f6-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.476309 4909 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.476318 4909 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.476326 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zllrt\" (UniqueName: \"kubernetes.io/projected/e523aac5-088b-427f-890e-90ad45a407f6-kube-api-access-zllrt\") on node \"crc\" DevicePath \"\""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.476335 4909 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e523aac5-088b-427f-890e-90ad45a407f6-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.776905 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74" event={"ID":"e523aac5-088b-427f-890e-90ad45a407f6","Type":"ContainerDied","Data":"8b190bc1a143706a7fecb55f62f4256a4979173ef7fc36cf603499bdb1b2e940"}
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.776954 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b190bc1a143706a7fecb55f62f4256a4979173ef7fc36cf603499bdb1b2e940"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.777019 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-dmw74"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.886415 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-d5fkl"]
Nov 26 09:14:51 crc kubenswrapper[4909]: E1126 09:14:51.886957 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb55876d-4988-4104-aba1-28fcbb775359" containerName="registry-server"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.886979 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb55876d-4988-4104-aba1-28fcbb775359" containerName="registry-server"
Nov 26 09:14:51 crc kubenswrapper[4909]: E1126 09:14:51.887032 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb55876d-4988-4104-aba1-28fcbb775359" containerName="extract-utilities"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.887056 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb55876d-4988-4104-aba1-28fcbb775359" containerName="extract-utilities"
Nov 26 09:14:51 crc kubenswrapper[4909]: E1126 09:14:51.887077 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb55876d-4988-4104-aba1-28fcbb775359" containerName="extract-content"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.887086 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb55876d-4988-4104-aba1-28fcbb775359" containerName="extract-content"
Nov 26 09:14:51 crc kubenswrapper[4909]: E1126 09:14:51.887131 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e523aac5-088b-427f-890e-90ad45a407f6" containerName="nova-cell1-openstack-openstack-cell1"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.887144 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e523aac5-088b-427f-890e-90ad45a407f6" containerName="nova-cell1-openstack-openstack-cell1"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.887547 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb55876d-4988-4104-aba1-28fcbb775359" containerName="registry-server"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.887585 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e523aac5-088b-427f-890e-90ad45a407f6" containerName="nova-cell1-openstack-openstack-cell1"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.888795 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.891185 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.891309 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.891185 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.891942 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.892157 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.899192 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-d5fkl"]
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.985711 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceph\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.985819 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.985862 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.985899 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.985927 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6z74\" (UniqueName: \"kubernetes.io/projected/03a665a8-d345-4f57-b8fd-5d22c4d3804b-kube-api-access-n6z74\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.986201 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ssh-key\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.986384 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-inventory\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:51 crc kubenswrapper[4909]: I1126 09:14:51.986565 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.088875 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.089307 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.089365 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.089406 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6z74\" (UniqueName: \"kubernetes.io/projected/03a665a8-d345-4f57-b8fd-5d22c4d3804b-kube-api-access-n6z74\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.089504 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ssh-key\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.089570 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-inventory\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.089669 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.089711 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceph\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.093405 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.094065 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.094154 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-inventory\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.094322 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceph\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.094388 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.094515 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.096253 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ssh-key\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.135171 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6z74\" (UniqueName: \"kubernetes.io/projected/03a665a8-d345-4f57-b8fd-5d22c4d3804b-kube-api-access-n6z74\") pod \"telemetry-openstack-openstack-cell1-d5fkl\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") " pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.210395 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:14:52 crc kubenswrapper[4909]: I1126 09:14:52.801013 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-d5fkl"]
Nov 26 09:14:52 crc kubenswrapper[4909]: W1126 09:14:52.801273 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03a665a8_d345_4f57_b8fd_5d22c4d3804b.slice/crio-7b240d589b1f86c8c7f7e4d7735043b6ce9eadc0c8a79d7dc230267c339fa636 WatchSource:0}: Error finding container 7b240d589b1f86c8c7f7e4d7735043b6ce9eadc0c8a79d7dc230267c339fa636: Status 404 returned error can't find the container with id 7b240d589b1f86c8c7f7e4d7735043b6ce9eadc0c8a79d7dc230267c339fa636
Nov 26 09:14:53 crc kubenswrapper[4909]: I1126 09:14:53.801562 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-d5fkl" event={"ID":"03a665a8-d345-4f57-b8fd-5d22c4d3804b","Type":"ContainerStarted","Data":"ae7401898202f56317a620e7de5988b964d4aa2a4893f107444f6ae083c3cfdf"}
Nov 26 09:14:53 crc kubenswrapper[4909]: I1126 09:14:53.802145 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-d5fkl" event={"ID":"03a665a8-d345-4f57-b8fd-5d22c4d3804b","Type":"ContainerStarted","Data":"7b240d589b1f86c8c7f7e4d7735043b6ce9eadc0c8a79d7dc230267c339fa636"}
Nov 26 09:14:53 crc kubenswrapper[4909]: I1126 09:14:53.824129 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-d5fkl" podStartSLOduration=2.334387356 podStartE2EDuration="2.824093973s" podCreationTimestamp="2025-11-26 09:14:51 +0000 UTC" firstStartedPulling="2025-11-26 09:14:52.806241141 +0000 UTC m=+8064.952452317" lastFinishedPulling="2025-11-26 09:14:53.295947758 +0000 UTC m=+8065.442158934" observedRunningTime="2025-11-26 09:14:53.817466703 +0000 UTC m=+8065.963677909" watchObservedRunningTime="2025-11-26 09:14:53.824093973 +0000 UTC m=+8065.970305179"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.159803 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"]
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.165028 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.171388 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.172960 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.182735 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba01935e-4d26-4e8c-adf5-5192f7caea6c-config-volume\") pod \"collect-profiles-29402475-spbtf\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.182832 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wd7f\" (UniqueName: \"kubernetes.io/projected/ba01935e-4d26-4e8c-adf5-5192f7caea6c-kube-api-access-2wd7f\") pod \"collect-profiles-29402475-spbtf\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.183325 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba01935e-4d26-4e8c-adf5-5192f7caea6c-secret-volume\") pod \"collect-profiles-29402475-spbtf\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.199254 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"]
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.286300 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba01935e-4d26-4e8c-adf5-5192f7caea6c-secret-volume\") pod \"collect-profiles-29402475-spbtf\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.286392 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba01935e-4d26-4e8c-adf5-5192f7caea6c-config-volume\") pod \"collect-profiles-29402475-spbtf\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.286420 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wd7f\" (UniqueName: \"kubernetes.io/projected/ba01935e-4d26-4e8c-adf5-5192f7caea6c-kube-api-access-2wd7f\") pod \"collect-profiles-29402475-spbtf\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.287360 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba01935e-4d26-4e8c-adf5-5192f7caea6c-config-volume\") pod \"collect-profiles-29402475-spbtf\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.295976 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba01935e-4d26-4e8c-adf5-5192f7caea6c-secret-volume\") pod \"collect-profiles-29402475-spbtf\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.310183 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wd7f\" (UniqueName: \"kubernetes.io/projected/ba01935e-4d26-4e8c-adf5-5192f7caea6c-kube-api-access-2wd7f\") pod \"collect-profiles-29402475-spbtf\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.495193 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:00 crc kubenswrapper[4909]: W1126 09:15:00.994696 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba01935e_4d26_4e8c_adf5_5192f7caea6c.slice/crio-5bafff3b0b72a20fac2cfb5da579a951eb4294750729a20e6a7f81cf445ebe6b WatchSource:0}: Error finding container 5bafff3b0b72a20fac2cfb5da579a951eb4294750729a20e6a7f81cf445ebe6b: Status 404 returned error can't find the container with id 5bafff3b0b72a20fac2cfb5da579a951eb4294750729a20e6a7f81cf445ebe6b
Nov 26 09:15:00 crc kubenswrapper[4909]: I1126 09:15:00.995254 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"]
Nov 26 09:15:01 crc kubenswrapper[4909]: E1126 09:15:01.575645 4909 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba01935e_4d26_4e8c_adf5_5192f7caea6c.slice/crio-90d083bd77dd526ab6b1f8c6ea6d6cdb609de374ca41e398d4733b51189cd750.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba01935e_4d26_4e8c_adf5_5192f7caea6c.slice/crio-conmon-90d083bd77dd526ab6b1f8c6ea6d6cdb609de374ca41e398d4733b51189cd750.scope\": RecentStats: unable to find data in memory cache]"
Nov 26 09:15:01 crc kubenswrapper[4909]: I1126 09:15:01.889838 4909 generic.go:334] "Generic (PLEG): container finished" podID="ba01935e-4d26-4e8c-adf5-5192f7caea6c" containerID="90d083bd77dd526ab6b1f8c6ea6d6cdb609de374ca41e398d4733b51189cd750" exitCode=0
Nov 26 09:15:01 crc kubenswrapper[4909]: I1126 09:15:01.889907 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf" event={"ID":"ba01935e-4d26-4e8c-adf5-5192f7caea6c","Type":"ContainerDied","Data":"90d083bd77dd526ab6b1f8c6ea6d6cdb609de374ca41e398d4733b51189cd750"}
Nov 26 09:15:01 crc kubenswrapper[4909]: I1126 09:15:01.890357 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf" event={"ID":"ba01935e-4d26-4e8c-adf5-5192f7caea6c","Type":"ContainerStarted","Data":"5bafff3b0b72a20fac2cfb5da579a951eb4294750729a20e6a7f81cf445ebe6b"}
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.306940 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.452103 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba01935e-4d26-4e8c-adf5-5192f7caea6c-config-volume\") pod \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") "
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.452262 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wd7f\" (UniqueName: \"kubernetes.io/projected/ba01935e-4d26-4e8c-adf5-5192f7caea6c-kube-api-access-2wd7f\") pod \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") "
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.452348 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba01935e-4d26-4e8c-adf5-5192f7caea6c-secret-volume\") pod \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\" (UID: \"ba01935e-4d26-4e8c-adf5-5192f7caea6c\") "
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.452926 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba01935e-4d26-4e8c-adf5-5192f7caea6c-config-volume" (OuterVolumeSpecName: "config-volume") pod "ba01935e-4d26-4e8c-adf5-5192f7caea6c" (UID: "ba01935e-4d26-4e8c-adf5-5192f7caea6c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.458166 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba01935e-4d26-4e8c-adf5-5192f7caea6c-kube-api-access-2wd7f" (OuterVolumeSpecName: "kube-api-access-2wd7f") pod "ba01935e-4d26-4e8c-adf5-5192f7caea6c" (UID: "ba01935e-4d26-4e8c-adf5-5192f7caea6c"). InnerVolumeSpecName "kube-api-access-2wd7f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.458702 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba01935e-4d26-4e8c-adf5-5192f7caea6c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ba01935e-4d26-4e8c-adf5-5192f7caea6c" (UID: "ba01935e-4d26-4e8c-adf5-5192f7caea6c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.555213 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba01935e-4d26-4e8c-adf5-5192f7caea6c-config-volume\") on node \"crc\" DevicePath \"\""
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.555250 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wd7f\" (UniqueName: \"kubernetes.io/projected/ba01935e-4d26-4e8c-adf5-5192f7caea6c-kube-api-access-2wd7f\") on node \"crc\" DevicePath \"\""
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.555262 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba01935e-4d26-4e8c-adf5-5192f7caea6c-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.919944 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf" event={"ID":"ba01935e-4d26-4e8c-adf5-5192f7caea6c","Type":"ContainerDied","Data":"5bafff3b0b72a20fac2cfb5da579a951eb4294750729a20e6a7f81cf445ebe6b"}
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.919983 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bafff3b0b72a20fac2cfb5da579a951eb4294750729a20e6a7f81cf445ebe6b"
Nov 26 09:15:03 crc kubenswrapper[4909]: I1126 09:15:03.919998 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402475-spbtf"
Nov 26 09:15:04 crc kubenswrapper[4909]: I1126 09:15:04.417116 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd"]
Nov 26 09:15:04 crc kubenswrapper[4909]: I1126 09:15:04.431354 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402430-48kfd"]
Nov 26 09:15:04 crc kubenswrapper[4909]: I1126 09:15:04.511816 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ebc913-9174-47f4-a1c7-c299a0aba8dd" path="/var/lib/kubelet/pods/63ebc913-9174-47f4-a1c7-c299a0aba8dd/volumes"
Nov 26 09:15:46 crc kubenswrapper[4909]: I1126 09:15:46.905233 4909 scope.go:117] "RemoveContainer" containerID="b6602743c38cd37c83774262bb295d927b50a52e64d69325157a302506694128"
Nov 26 09:17:07 crc kubenswrapper[4909]: I1126 09:17:07.300825 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 09:17:07 crc kubenswrapper[4909]: I1126 09:17:07.301408 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 09:17:37 crc kubenswrapper[4909]: I1126 09:17:37.301526 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 09:17:37 crc kubenswrapper[4909]: I1126 09:17:37.302238 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 09:18:07 crc kubenswrapper[4909]: I1126 09:18:07.300579 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 09:18:07 crc kubenswrapper[4909]: I1126 09:18:07.301220 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 09:18:07 crc kubenswrapper[4909]: I1126 09:18:07.301277 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv"
Nov 26 09:18:07 crc kubenswrapper[4909]: I1126 09:18:07.302235 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc39db933dd67d1f3d921b87fa72fde5f9f5072131e6833356ff3d7b9e58c919"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 26 09:18:07 crc kubenswrapper[4909]: I1126 09:18:07.302305 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://dc39db933dd67d1f3d921b87fa72fde5f9f5072131e6833356ff3d7b9e58c919" gracePeriod=600
Nov 26 09:18:07 crc kubenswrapper[4909]: I1126 09:18:07.818970 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="dc39db933dd67d1f3d921b87fa72fde5f9f5072131e6833356ff3d7b9e58c919" exitCode=0
Nov 26 09:18:07 crc kubenswrapper[4909]: I1126 09:18:07.819055 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"dc39db933dd67d1f3d921b87fa72fde5f9f5072131e6833356ff3d7b9e58c919"}
Nov 26 09:18:07 crc kubenswrapper[4909]: I1126 09:18:07.819216 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d"}
Nov 26 09:18:07 crc kubenswrapper[4909]: I1126 09:18:07.819239 4909 scope.go:117] "RemoveContainer" containerID="f4b9f7879da52dff81b906c4b22ee790fcf367ab2c0b61d1340e12da2ce23ae4"
Nov 26 09:18:51 crc kubenswrapper[4909]: I1126 09:18:51.329157 4909 generic.go:334] "Generic (PLEG): container finished" podID="03a665a8-d345-4f57-b8fd-5d22c4d3804b" containerID="ae7401898202f56317a620e7de5988b964d4aa2a4893f107444f6ae083c3cfdf" exitCode=0
Nov 26 09:18:51 crc kubenswrapper[4909]: I1126 09:18:51.329233 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-d5fkl" event={"ID":"03a665a8-d345-4f57-b8fd-5d22c4d3804b","Type":"ContainerDied","Data":"ae7401898202f56317a620e7de5988b964d4aa2a4893f107444f6ae083c3cfdf"}
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.804282 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.858287 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-inventory\") pod \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") "
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.858399 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-1\") pod \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") "
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.858633 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceph\") pod \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") "
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.858763 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6z74\" (UniqueName: \"kubernetes.io/projected/03a665a8-d345-4f57-b8fd-5d22c4d3804b-kube-api-access-n6z74\") pod \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") "
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.858858 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ssh-key\") pod \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") "
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.858917 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-0\") pod \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") "
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.858974 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-2\") pod \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") "
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.859014 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-telemetry-combined-ca-bundle\") pod \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\" (UID: \"03a665a8-d345-4f57-b8fd-5d22c4d3804b\") "
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.865200 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceph" (OuterVolumeSpecName: "ceph") pod "03a665a8-d345-4f57-b8fd-5d22c4d3804b" (UID: "03a665a8-d345-4f57-b8fd-5d22c4d3804b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.865915 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "03a665a8-d345-4f57-b8fd-5d22c4d3804b" (UID: "03a665a8-d345-4f57-b8fd-5d22c4d3804b"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.876866 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a665a8-d345-4f57-b8fd-5d22c4d3804b-kube-api-access-n6z74" (OuterVolumeSpecName: "kube-api-access-n6z74") pod "03a665a8-d345-4f57-b8fd-5d22c4d3804b" (UID: "03a665a8-d345-4f57-b8fd-5d22c4d3804b"). InnerVolumeSpecName "kube-api-access-n6z74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.892130 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "03a665a8-d345-4f57-b8fd-5d22c4d3804b" (UID: "03a665a8-d345-4f57-b8fd-5d22c4d3804b"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.892311 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-inventory" (OuterVolumeSpecName: "inventory") pod "03a665a8-d345-4f57-b8fd-5d22c4d3804b" (UID: "03a665a8-d345-4f57-b8fd-5d22c4d3804b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.894247 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "03a665a8-d345-4f57-b8fd-5d22c4d3804b" (UID: "03a665a8-d345-4f57-b8fd-5d22c4d3804b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.901192 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "03a665a8-d345-4f57-b8fd-5d22c4d3804b" (UID: "03a665a8-d345-4f57-b8fd-5d22c4d3804b"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.901512 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "03a665a8-d345-4f57-b8fd-5d22c4d3804b" (UID: "03a665a8-d345-4f57-b8fd-5d22c4d3804b"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.962358 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-inventory\") on node \"crc\" DevicePath \"\""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.962396 4909 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.962410 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceph\") on node \"crc\" DevicePath \"\""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.962424 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6z74\" (UniqueName: \"kubernetes.io/projected/03a665a8-d345-4f57-b8fd-5d22c4d3804b-kube-api-access-n6z74\") on node \"crc\" DevicePath \"\""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.962437 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.962446 4909 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.962454 4909 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Nov 26 09:18:52 crc kubenswrapper[4909]: I1126 09:18:52.962681 4909 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a665a8-d345-4f57-b8fd-5d22c4d3804b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.360517 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-d5fkl" event={"ID":"03a665a8-d345-4f57-b8fd-5d22c4d3804b","Type":"ContainerDied","Data":"7b240d589b1f86c8c7f7e4d7735043b6ce9eadc0c8a79d7dc230267c339fa636"}
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.360573 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b240d589b1f86c8c7f7e4d7735043b6ce9eadc0c8a79d7dc230267c339fa636"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.360665 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-d5fkl"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.462324 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"]
Nov 26 09:18:53 crc kubenswrapper[4909]: E1126 09:18:53.462937 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba01935e-4d26-4e8c-adf5-5192f7caea6c" containerName="collect-profiles"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.462958 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba01935e-4d26-4e8c-adf5-5192f7caea6c" containerName="collect-profiles"
Nov 26 09:18:53 crc kubenswrapper[4909]: E1126 09:18:53.462992 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a665a8-d345-4f57-b8fd-5d22c4d3804b" containerName="telemetry-openstack-openstack-cell1"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.463000 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a665a8-d345-4f57-b8fd-5d22c4d3804b" containerName="telemetry-openstack-openstack-cell1"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.463275 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a665a8-d345-4f57-b8fd-5d22c4d3804b" containerName="telemetry-openstack-openstack-cell1"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.463314 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba01935e-4d26-4e8c-adf5-5192f7caea6c" containerName="collect-profiles"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.464216 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.466708 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.466949 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.467198 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.467347 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.467493 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.477924 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"]
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.577442 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.577946 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.578101 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd75q\" (UniqueName: \"kubernetes.io/projected/57f07bac-a5ba-488c-91f2-e925ad366f26-kube-api-access-nd75q\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.578287 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.578364 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.578466 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.679866 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd75q\" (UniqueName: \"kubernetes.io/projected/57f07bac-a5ba-488c-91f2-e925ad366f26-kube-api-access-nd75q\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.679987 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.680010 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.680050 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.680105 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.680120 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.684760 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.684792 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.685307 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.686193 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.686318 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.695405 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd75q\" (UniqueName: \"kubernetes.io/projected/57f07bac-a5ba-488c-91f2-e925ad366f26-kube-api-access-nd75q\") pod \"neutron-sriov-openstack-openstack-cell1-2jjv5\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:53 crc kubenswrapper[4909]: I1126 09:18:53.786244 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"
Nov 26 09:18:54 crc kubenswrapper[4909]: W1126 09:18:54.322497 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57f07bac_a5ba_488c_91f2_e925ad366f26.slice/crio-19648a1c5083c73f070e02b89e95abfa5b11f8376d7a4bf3411372a6eaf40c88 WatchSource:0}: Error finding container 19648a1c5083c73f070e02b89e95abfa5b11f8376d7a4bf3411372a6eaf40c88: Status 404 returned error can't find the container with id 19648a1c5083c73f070e02b89e95abfa5b11f8376d7a4bf3411372a6eaf40c88
Nov 26 09:18:54 crc kubenswrapper[4909]: I1126 09:18:54.323491 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-2jjv5"]
Nov 26 09:18:54 crc kubenswrapper[4909]: I1126 09:18:54.325471 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 26 09:18:54 crc kubenswrapper[4909]: I1126 09:18:54.371042 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5" event={"ID":"57f07bac-a5ba-488c-91f2-e925ad366f26","Type":"ContainerStarted","Data":"19648a1c5083c73f070e02b89e95abfa5b11f8376d7a4bf3411372a6eaf40c88"}
Nov 26 09:18:55 crc kubenswrapper[4909]: I1126 09:18:55.385836 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5" event={"ID":"57f07bac-a5ba-488c-91f2-e925ad366f26","Type":"ContainerStarted","Data":"b9d65c3b17c626023715c35d63aeac1f5e5d0b3b803f18e05bd1a1905ca33d6a"}
Nov 26 09:18:55 crc kubenswrapper[4909]: I1126 09:18:55.422067 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5" podStartSLOduration=1.777038928 podStartE2EDuration="2.422049161s" podCreationTimestamp="2025-11-26 09:18:53 +0000 UTC" firstStartedPulling="2025-11-26 09:18:54.325271442 +0000 UTC m=+8306.471482608" lastFinishedPulling="2025-11-26 09:18:54.970281685 +0000 UTC m=+8307.116492841" observedRunningTime="2025-11-26 09:18:55.411275146 +0000 UTC m=+8307.557486322" watchObservedRunningTime="2025-11-26 09:18:55.422049161 +0000 UTC m=+8307.568260327"
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.152508 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qlhwh"]
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.155548 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlhwh"
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.184758 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlhwh"]
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.335244 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-utilities\") pod \"redhat-marketplace-qlhwh\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " pod="openshift-marketplace/redhat-marketplace-qlhwh"
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.335292 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkbqg\" (UniqueName: \"kubernetes.io/projected/5e4833ae-c75c-4b8b-9542-efe6f4989893-kube-api-access-mkbqg\") pod \"redhat-marketplace-qlhwh\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " pod="openshift-marketplace/redhat-marketplace-qlhwh"
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.335408 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-catalog-content\") pod \"redhat-marketplace-qlhwh\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " pod="openshift-marketplace/redhat-marketplace-qlhwh"
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.437110 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-utilities\") pod \"redhat-marketplace-qlhwh\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " pod="openshift-marketplace/redhat-marketplace-qlhwh"
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.437191 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkbqg\" (UniqueName: \"kubernetes.io/projected/5e4833ae-c75c-4b8b-9542-efe6f4989893-kube-api-access-mkbqg\") pod \"redhat-marketplace-qlhwh\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " pod="openshift-marketplace/redhat-marketplace-qlhwh"
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.437457 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-catalog-content\") pod \"redhat-marketplace-qlhwh\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " pod="openshift-marketplace/redhat-marketplace-qlhwh"
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.437693 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-utilities\") pod \"redhat-marketplace-qlhwh\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " pod="openshift-marketplace/redhat-marketplace-qlhwh"
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.437930 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-catalog-content\") pod \"redhat-marketplace-qlhwh\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " pod="openshift-marketplace/redhat-marketplace-qlhwh"
Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.463930 4909 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mkbqg\" (UniqueName: \"kubernetes.io/projected/5e4833ae-c75c-4b8b-9542-efe6f4989893-kube-api-access-mkbqg\") pod \"redhat-marketplace-qlhwh\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " pod="openshift-marketplace/redhat-marketplace-qlhwh" Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.474935 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlhwh" Nov 26 09:19:10 crc kubenswrapper[4909]: I1126 09:19:10.949116 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlhwh"] Nov 26 09:19:11 crc kubenswrapper[4909]: I1126 09:19:11.540977 4909 generic.go:334] "Generic (PLEG): container finished" podID="5e4833ae-c75c-4b8b-9542-efe6f4989893" containerID="a5b307a51bcb67ae090ee184d79958ac070495a26acf207d36ef46947ca39ba6" exitCode=0 Nov 26 09:19:11 crc kubenswrapper[4909]: I1126 09:19:11.541018 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlhwh" event={"ID":"5e4833ae-c75c-4b8b-9542-efe6f4989893","Type":"ContainerDied","Data":"a5b307a51bcb67ae090ee184d79958ac070495a26acf207d36ef46947ca39ba6"} Nov 26 09:19:11 crc kubenswrapper[4909]: I1126 09:19:11.541041 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlhwh" event={"ID":"5e4833ae-c75c-4b8b-9542-efe6f4989893","Type":"ContainerStarted","Data":"cbd7b3f7030aa2b37a6c761f0e4cf561dcc83c894b8878c5150f242795ab3936"} Nov 26 09:19:12 crc kubenswrapper[4909]: I1126 09:19:12.555558 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlhwh" event={"ID":"5e4833ae-c75c-4b8b-9542-efe6f4989893","Type":"ContainerStarted","Data":"1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6"} Nov 26 09:19:13 crc kubenswrapper[4909]: I1126 09:19:13.568964 4909 generic.go:334] "Generic (PLEG): container finished" podID="5e4833ae-c75c-4b8b-9542-efe6f4989893" containerID="1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6" exitCode=0 Nov 26 09:19:13 crc kubenswrapper[4909]: I1126 09:19:13.569020 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlhwh" event={"ID":"5e4833ae-c75c-4b8b-9542-efe6f4989893","Type":"ContainerDied","Data":"1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6"} Nov 26 09:19:15 crc kubenswrapper[4909]: I1126 09:19:15.589088 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlhwh" event={"ID":"5e4833ae-c75c-4b8b-9542-efe6f4989893","Type":"ContainerStarted","Data":"ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8"} Nov 26 09:19:15 crc kubenswrapper[4909]: I1126 09:19:15.614913 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qlhwh" podStartSLOduration=2.625791444 podStartE2EDuration="5.614892833s" podCreationTimestamp="2025-11-26 09:19:10 +0000 UTC" firstStartedPulling="2025-11-26 09:19:11.54339229 +0000 UTC m=+8323.689603456" lastFinishedPulling="2025-11-26 09:19:14.532493669 +0000 UTC m=+8326.678704845" observedRunningTime="2025-11-26 09:19:15.606833232 +0000 UTC m=+8327.753044418" watchObservedRunningTime="2025-11-26 09:19:15.614892833 +0000 UTC m=+8327.761103999" Nov 26 09:19:20 crc kubenswrapper[4909]: I1126 09:19:20.475677 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-qlhwh" Nov 26 09:19:20 crc kubenswrapper[4909]: I1126 09:19:20.476382 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qlhwh" Nov 26 09:19:20 crc kubenswrapper[4909]: I1126 09:19:20.530431 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qlhwh" Nov 26 09:19:20 crc kubenswrapper[4909]: I1126 09:19:20.679145 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qlhwh" Nov 26 09:19:20 crc kubenswrapper[4909]: I1126 09:19:20.773672 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlhwh"] Nov 26 09:19:22 crc kubenswrapper[4909]: I1126 09:19:22.651934 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qlhwh" podUID="5e4833ae-c75c-4b8b-9542-efe6f4989893" containerName="registry-server" containerID="cri-o://ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8" gracePeriod=2 Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.176921 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlhwh" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.319312 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-utilities\") pod \"5e4833ae-c75c-4b8b-9542-efe6f4989893\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.319770 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkbqg\" (UniqueName: \"kubernetes.io/projected/5e4833ae-c75c-4b8b-9542-efe6f4989893-kube-api-access-mkbqg\") pod \"5e4833ae-c75c-4b8b-9542-efe6f4989893\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.319882 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-catalog-content\") pod \"5e4833ae-c75c-4b8b-9542-efe6f4989893\" (UID: \"5e4833ae-c75c-4b8b-9542-efe6f4989893\") " Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.320547 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-utilities" (OuterVolumeSpecName: "utilities") pod "5e4833ae-c75c-4b8b-9542-efe6f4989893" (UID: "5e4833ae-c75c-4b8b-9542-efe6f4989893"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.325368 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4833ae-c75c-4b8b-9542-efe6f4989893-kube-api-access-mkbqg" (OuterVolumeSpecName: "kube-api-access-mkbqg") pod "5e4833ae-c75c-4b8b-9542-efe6f4989893" (UID: "5e4833ae-c75c-4b8b-9542-efe6f4989893"). InnerVolumeSpecName "kube-api-access-mkbqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.337190 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e4833ae-c75c-4b8b-9542-efe6f4989893" (UID: "5e4833ae-c75c-4b8b-9542-efe6f4989893"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.421934 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.421963 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkbqg\" (UniqueName: \"kubernetes.io/projected/5e4833ae-c75c-4b8b-9542-efe6f4989893-kube-api-access-mkbqg\") on node \"crc\" DevicePath \"\"" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.421973 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4833ae-c75c-4b8b-9542-efe6f4989893-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.667405 4909 generic.go:334] "Generic (PLEG): container finished" podID="5e4833ae-c75c-4b8b-9542-efe6f4989893" containerID="ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8" exitCode=0 Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.667455 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlhwh" event={"ID":"5e4833ae-c75c-4b8b-9542-efe6f4989893","Type":"ContainerDied","Data":"ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8"} Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.667475 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlhwh" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.667502 4909 scope.go:117] "RemoveContainer" containerID="ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.667489 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlhwh" event={"ID":"5e4833ae-c75c-4b8b-9542-efe6f4989893","Type":"ContainerDied","Data":"cbd7b3f7030aa2b37a6c761f0e4cf561dcc83c894b8878c5150f242795ab3936"} Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.695964 4909 scope.go:117] "RemoveContainer" containerID="1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.704561 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlhwh"] Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.714373 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlhwh"] Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.730317 4909 scope.go:117] "RemoveContainer" containerID="a5b307a51bcb67ae090ee184d79958ac070495a26acf207d36ef46947ca39ba6" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.761029 4909 scope.go:117] "RemoveContainer" containerID="ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8" Nov 26 09:19:23 crc kubenswrapper[4909]: E1126 09:19:23.761514 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8\": container with ID starting with ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8 not found: ID does not exist" containerID="ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.761544 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8"} err="failed to get container status \"ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8\": rpc error: code = NotFound desc = could not find container \"ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8\": container with ID starting with ae01da43ec6548bcdda83512cfaec00bc6a6c80aa7b9812802526ad48d19a6f8 not found: ID does not exist" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.761565 4909 scope.go:117] "RemoveContainer" containerID="1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6" Nov 26 09:19:23 crc kubenswrapper[4909]: E1126 09:19:23.762136 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6\": container with ID starting with 1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6 not found: ID does not exist" containerID="1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.762166 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6"} err="failed to get container status \"1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6\": rpc error: code = NotFound desc = could not find 
container \"1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6\": container with ID starting with 1b8ef338f1d3ede3af3bea0690f9d08973f20830dc9dcf092df253cc131f63c6 not found: ID does not exist" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.762180 4909 scope.go:117] "RemoveContainer" containerID="a5b307a51bcb67ae090ee184d79958ac070495a26acf207d36ef46947ca39ba6" Nov 26 09:19:23 crc kubenswrapper[4909]: E1126 09:19:23.762629 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5b307a51bcb67ae090ee184d79958ac070495a26acf207d36ef46947ca39ba6\": container with ID starting with a5b307a51bcb67ae090ee184d79958ac070495a26acf207d36ef46947ca39ba6 not found: ID does not exist" containerID="a5b307a51bcb67ae090ee184d79958ac070495a26acf207d36ef46947ca39ba6" Nov 26 09:19:23 crc kubenswrapper[4909]: I1126 09:19:23.762669 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5b307a51bcb67ae090ee184d79958ac070495a26acf207d36ef46947ca39ba6"} err="failed to get container status \"a5b307a51bcb67ae090ee184d79958ac070495a26acf207d36ef46947ca39ba6\": rpc error: code = NotFound desc = could not find container \"a5b307a51bcb67ae090ee184d79958ac070495a26acf207d36ef46947ca39ba6\": container with ID starting with a5b307a51bcb67ae090ee184d79958ac070495a26acf207d36ef46947ca39ba6 not found: ID does not exist" Nov 26 09:19:24 crc kubenswrapper[4909]: I1126 09:19:24.511566 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4833ae-c75c-4b8b-9542-efe6f4989893" path="/var/lib/kubelet/pods/5e4833ae-c75c-4b8b-9542-efe6f4989893/volumes" Nov 26 09:20:07 crc kubenswrapper[4909]: I1126 09:20:07.301302 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:20:07 crc kubenswrapper[4909]: I1126 09:20:07.302066 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.291622 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cbhdr"] Nov 26 09:20:33 crc kubenswrapper[4909]: E1126 09:20:33.292515 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4833ae-c75c-4b8b-9542-efe6f4989893" containerName="registry-server" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.292526 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4833ae-c75c-4b8b-9542-efe6f4989893" containerName="registry-server" Nov 26 09:20:33 crc kubenswrapper[4909]: E1126 09:20:33.292552 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4833ae-c75c-4b8b-9542-efe6f4989893" containerName="extract-content" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.292558 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4833ae-c75c-4b8b-9542-efe6f4989893" containerName="extract-content" Nov 26 09:20:33 crc kubenswrapper[4909]: E1126 09:20:33.292577 4909 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5e4833ae-c75c-4b8b-9542-efe6f4989893" containerName="extract-utilities" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.292584 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4833ae-c75c-4b8b-9542-efe6f4989893" containerName="extract-utilities" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.293670 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4833ae-c75c-4b8b-9542-efe6f4989893" containerName="registry-server" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.295422 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.303628 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cbhdr"] Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.323104 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-catalog-content\") pod \"community-operators-cbhdr\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.323217 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phvxx\" (UniqueName: \"kubernetes.io/projected/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-kube-api-access-phvxx\") pod \"community-operators-cbhdr\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.323256 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-utilities\") pod \"community-operators-cbhdr\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.424997 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phvxx\" (UniqueName: \"kubernetes.io/projected/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-kube-api-access-phvxx\") pod \"community-operators-cbhdr\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.425138 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-utilities\") pod \"community-operators-cbhdr\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.425364 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-catalog-content\") pod \"community-operators-cbhdr\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.425907 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-catalog-content\") pod 
\"community-operators-cbhdr\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.425976 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-utilities\") pod \"community-operators-cbhdr\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.447870 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phvxx\" (UniqueName: \"kubernetes.io/projected/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-kube-api-access-phvxx\") pod \"community-operators-cbhdr\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:33 crc kubenswrapper[4909]: I1126 09:20:33.625430 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:34 crc kubenswrapper[4909]: I1126 09:20:34.126910 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cbhdr"] Nov 26 09:20:34 crc kubenswrapper[4909]: I1126 09:20:34.436062 4909 generic.go:334] "Generic (PLEG): container finished" podID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" containerID="dd69500b30007b2bc9a4ce8e6f863049acc179432a41b8e02b926f65010b851e" exitCode=0 Nov 26 09:20:34 crc kubenswrapper[4909]: I1126 09:20:34.436496 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbhdr" event={"ID":"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa","Type":"ContainerDied","Data":"dd69500b30007b2bc9a4ce8e6f863049acc179432a41b8e02b926f65010b851e"} Nov 26 09:20:34 crc kubenswrapper[4909]: I1126 09:20:34.436527 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbhdr" event={"ID":"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa","Type":"ContainerStarted","Data":"c6373b0cedd14067cd2e95448e9fcc790c5738097077b3849268f5ec03aeb732"} Nov 26 09:20:36 crc kubenswrapper[4909]: I1126 09:20:36.458075 4909 generic.go:334] "Generic (PLEG): container finished" podID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" containerID="3b1e715a828763d0db484d0b543e8a6d07fd5e55a697d7b7d80bd7cfcdcc06c8" exitCode=0 Nov 26 09:20:36 crc kubenswrapper[4909]: I1126 09:20:36.458177 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbhdr" event={"ID":"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa","Type":"ContainerDied","Data":"3b1e715a828763d0db484d0b543e8a6d07fd5e55a697d7b7d80bd7cfcdcc06c8"} Nov 26 09:20:37 crc kubenswrapper[4909]: I1126 09:20:37.301222 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:20:37 crc kubenswrapper[4909]: I1126 09:20:37.301634 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:20:37 crc kubenswrapper[4909]: I1126 
09:20:37.472520 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbhdr" event={"ID":"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa","Type":"ContainerStarted","Data":"91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932"} Nov 26 09:20:37 crc kubenswrapper[4909]: I1126 09:20:37.502711 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cbhdr" podStartSLOduration=1.9580777980000001 podStartE2EDuration="4.502691492s" podCreationTimestamp="2025-11-26 09:20:33 +0000 UTC" firstStartedPulling="2025-11-26 09:20:34.439477115 +0000 UTC m=+8406.585688281" lastFinishedPulling="2025-11-26 09:20:36.984090809 +0000 UTC m=+8409.130301975" observedRunningTime="2025-11-26 09:20:37.489133571 +0000 UTC m=+8409.635344737" watchObservedRunningTime="2025-11-26 09:20:37.502691492 +0000 UTC m=+8409.648902658" Nov 26 09:20:43 crc kubenswrapper[4909]: I1126 09:20:43.626524 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:43 crc kubenswrapper[4909]: I1126 09:20:43.627161 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:43 crc kubenswrapper[4909]: I1126 09:20:43.709386 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:44 crc kubenswrapper[4909]: I1126 09:20:44.589007 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:44 crc kubenswrapper[4909]: I1126 09:20:44.637030 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cbhdr"] Nov 26 09:20:46 crc kubenswrapper[4909]: I1126 09:20:46.561202 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cbhdr" podUID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" containerName="registry-server" containerID="cri-o://91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932" gracePeriod=2 Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.115811 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.244310 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-utilities\") pod \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.244373 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phvxx\" (UniqueName: \"kubernetes.io/projected/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-kube-api-access-phvxx\") pod \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.244712 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-catalog-content\") pod \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\" (UID: \"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa\") " Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.246470 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-utilities" (OuterVolumeSpecName: "utilities") pod "a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" (UID: "a45124d1-a6a2-48c1-8b6e-6bc8719d16aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.246727 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.251319 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-kube-api-access-phvxx" (OuterVolumeSpecName: "kube-api-access-phvxx") pod "a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" (UID: "a45124d1-a6a2-48c1-8b6e-6bc8719d16aa"). InnerVolumeSpecName "kube-api-access-phvxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.296512 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" (UID: "a45124d1-a6a2-48c1-8b6e-6bc8719d16aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.348532 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.348806 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phvxx\" (UniqueName: \"kubernetes.io/projected/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa-kube-api-access-phvxx\") on node \"crc\" DevicePath \"\"" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.581350 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cbhdr" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.581746 4909 generic.go:334] "Generic (PLEG): container finished" podID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" containerID="91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932" exitCode=0 Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.581798 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbhdr" event={"ID":"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa","Type":"ContainerDied","Data":"91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932"} Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.581829 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbhdr" event={"ID":"a45124d1-a6a2-48c1-8b6e-6bc8719d16aa","Type":"ContainerDied","Data":"c6373b0cedd14067cd2e95448e9fcc790c5738097077b3849268f5ec03aeb732"} Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.581849 4909 scope.go:117] "RemoveContainer" containerID="91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.613351 4909 scope.go:117] "RemoveContainer" containerID="3b1e715a828763d0db484d0b543e8a6d07fd5e55a697d7b7d80bd7cfcdcc06c8" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.631701 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cbhdr"] Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.643404 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cbhdr"] Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.656487 4909 scope.go:117] "RemoveContainer" containerID="dd69500b30007b2bc9a4ce8e6f863049acc179432a41b8e02b926f65010b851e" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.689101 4909 scope.go:117] "RemoveContainer" containerID="91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932" Nov 26 09:20:47 crc kubenswrapper[4909]: E1126 09:20:47.690731 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932\": container with ID starting with 91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932 not found: ID does not exist" containerID="91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.690767 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932"} err="failed to get container status \"91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932\": rpc error: code = NotFound desc = could not find container \"91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932\": container with ID starting with 91f526920cae396fc645496d1700c123a0fe91ab9d33f16faccbbd973e428932 not found: ID does not exist" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.690787 4909 scope.go:117] "RemoveContainer" containerID="3b1e715a828763d0db484d0b543e8a6d07fd5e55a697d7b7d80bd7cfcdcc06c8" Nov 26 09:20:47 crc kubenswrapper[4909]: E1126 09:20:47.692190 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b1e715a828763d0db484d0b543e8a6d07fd5e55a697d7b7d80bd7cfcdcc06c8\": container with ID 
starting with 3b1e715a828763d0db484d0b543e8a6d07fd5e55a697d7b7d80bd7cfcdcc06c8 not found: ID does not exist" containerID="3b1e715a828763d0db484d0b543e8a6d07fd5e55a697d7b7d80bd7cfcdcc06c8" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.692245 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b1e715a828763d0db484d0b543e8a6d07fd5e55a697d7b7d80bd7cfcdcc06c8"} err="failed to get container status \"3b1e715a828763d0db484d0b543e8a6d07fd5e55a697d7b7d80bd7cfcdcc06c8\": rpc error: code = NotFound desc = could not find container \"3b1e715a828763d0db484d0b543e8a6d07fd5e55a697d7b7d80bd7cfcdcc06c8\": container with ID starting with 3b1e715a828763d0db484d0b543e8a6d07fd5e55a697d7b7d80bd7cfcdcc06c8 not found: ID does not exist" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.692278 4909 scope.go:117] "RemoveContainer" containerID="dd69500b30007b2bc9a4ce8e6f863049acc179432a41b8e02b926f65010b851e" Nov 26 09:20:47 crc kubenswrapper[4909]: E1126 09:20:47.692560 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd69500b30007b2bc9a4ce8e6f863049acc179432a41b8e02b926f65010b851e\": container with ID starting with dd69500b30007b2bc9a4ce8e6f863049acc179432a41b8e02b926f65010b851e not found: ID does not exist" containerID="dd69500b30007b2bc9a4ce8e6f863049acc179432a41b8e02b926f65010b851e" Nov 26 09:20:47 crc kubenswrapper[4909]: I1126 09:20:47.692613 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd69500b30007b2bc9a4ce8e6f863049acc179432a41b8e02b926f65010b851e"} err="failed to get container status \"dd69500b30007b2bc9a4ce8e6f863049acc179432a41b8e02b926f65010b851e\": rpc error: code = NotFound desc = could not find container \"dd69500b30007b2bc9a4ce8e6f863049acc179432a41b8e02b926f65010b851e\": container with ID starting with dd69500b30007b2bc9a4ce8e6f863049acc179432a41b8e02b926f65010b851e not found: ID does not exist" Nov 26 09:20:48 crc kubenswrapper[4909]: I1126 09:20:48.516293 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" path="/var/lib/kubelet/pods/a45124d1-a6a2-48c1-8b6e-6bc8719d16aa/volumes" Nov 26 09:21:07 crc kubenswrapper[4909]: I1126 09:21:07.301177 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:21:07 crc kubenswrapper[4909]: I1126 09:21:07.301869 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:21:07 crc kubenswrapper[4909]: I1126 09:21:07.301926 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 09:21:07 crc kubenswrapper[4909]: I1126 09:21:07.302892 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d"} 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 09:21:07 crc kubenswrapper[4909]: I1126 09:21:07.302964 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" gracePeriod=600 Nov 26 09:21:07 crc kubenswrapper[4909]: E1126 09:21:07.425799 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:21:07 crc kubenswrapper[4909]: I1126 09:21:07.822433 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" exitCode=0 Nov 26 09:21:07 crc kubenswrapper[4909]: I1126 09:21:07.822493 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d"} Nov 26 09:21:07 crc kubenswrapper[4909]: I1126 09:21:07.823141 4909 scope.go:117] "RemoveContainer" containerID="dc39db933dd67d1f3d921b87fa72fde5f9f5072131e6833356ff3d7b9e58c919" Nov 26 09:21:07 crc kubenswrapper[4909]: I1126 09:21:07.824286 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:21:07 crc kubenswrapper[4909]: E1126 09:21:07.824968 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:21:21 crc kubenswrapper[4909]: I1126 09:21:21.499856 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:21:21 crc kubenswrapper[4909]: E1126 09:21:21.500694 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:21:32 crc kubenswrapper[4909]: I1126 09:21:32.499086 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:21:32 crc kubenswrapper[4909]: E1126 09:21:32.499957 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:21:43 crc kubenswrapper[4909]: I1126 09:21:43.499840 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:21:43 crc kubenswrapper[4909]: E1126 09:21:43.501474 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:21:54 crc kubenswrapper[4909]: I1126 09:21:54.499450 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:21:54 crc kubenswrapper[4909]: E1126 09:21:54.500947 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:22:05 crc kubenswrapper[4909]: I1126 09:22:05.498953 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:22:05 crc kubenswrapper[4909]: E1126 09:22:05.499738 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:22:19 crc kubenswrapper[4909]: I1126 09:22:19.500066 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:22:19 crc kubenswrapper[4909]: E1126 09:22:19.501202 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:22:30 crc kubenswrapper[4909]: I1126 09:22:30.500054 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:22:30 crc kubenswrapper[4909]: E1126 09:22:30.500716 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:22:42 crc kubenswrapper[4909]: I1126 09:22:42.499316 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:22:42 crc kubenswrapper[4909]: E1126 09:22:42.500182 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:22:55 crc kubenswrapper[4909]: I1126 09:22:55.499440 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:22:55 crc kubenswrapper[4909]: E1126 09:22:55.500314 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.102114 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xqxv6"] Nov 26 09:22:56 crc kubenswrapper[4909]: E1126 09:22:56.103056 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" containerName="extract-content" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.103084 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" containerName="extract-content" Nov 26 09:22:56 crc kubenswrapper[4909]: E1126 09:22:56.103127 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" containerName="extract-utilities" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.103136 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" containerName="extract-utilities" Nov 26 09:22:56 crc kubenswrapper[4909]: E1126 09:22:56.103146 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" containerName="registry-server" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.103153 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" containerName="registry-server" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.103454 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a45124d1-a6a2-48c1-8b6e-6bc8719d16aa" containerName="registry-server" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.105465 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.124486 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xqxv6"] Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.295028 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-catalog-content\") pod \"certified-operators-xqxv6\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.295144 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-utilities\") pod \"certified-operators-xqxv6\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.295238 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfxhm\" (UniqueName: \"kubernetes.io/projected/b3b026b5-33ad-4893-915c-9818892d6d99-kube-api-access-cfxhm\") pod \"certified-operators-xqxv6\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.397464 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-catalog-content\") pod \"certified-operators-xqxv6\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.397586 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-utilities\") pod \"certified-operators-xqxv6\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.397701 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfxhm\" (UniqueName: \"kubernetes.io/projected/b3b026b5-33ad-4893-915c-9818892d6d99-kube-api-access-cfxhm\") pod \"certified-operators-xqxv6\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.398170 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-catalog-content\") pod \"certified-operators-xqxv6\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.398438 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-utilities\") pod \"certified-operators-xqxv6\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.435940 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cfxhm\" (UniqueName: \"kubernetes.io/projected/b3b026b5-33ad-4893-915c-9818892d6d99-kube-api-access-cfxhm\") pod \"certified-operators-xqxv6\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:22:56 crc kubenswrapper[4909]: I1126 09:22:56.728003 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:22:57 crc kubenswrapper[4909]: I1126 09:22:57.246542 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xqxv6"] Nov 26 09:22:58 crc kubenswrapper[4909]: I1126 09:22:58.070750 4909 generic.go:334] "Generic (PLEG): container finished" podID="b3b026b5-33ad-4893-915c-9818892d6d99" containerID="41de20f979622985944f7105e144b6ad6ce92a18b6cb271f962e61ae341a4cf9" exitCode=0 Nov 26 09:22:58 crc kubenswrapper[4909]: I1126 09:22:58.070852 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqxv6" event={"ID":"b3b026b5-33ad-4893-915c-9818892d6d99","Type":"ContainerDied","Data":"41de20f979622985944f7105e144b6ad6ce92a18b6cb271f962e61ae341a4cf9"} Nov 26 09:22:58 crc kubenswrapper[4909]: I1126 09:22:58.071081 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqxv6" event={"ID":"b3b026b5-33ad-4893-915c-9818892d6d99","Type":"ContainerStarted","Data":"f075cb68a9490c194609738e9e076aa599e393a3b3fb9222507e704ae2abfc6a"} Nov 26 09:22:59 crc kubenswrapper[4909]: I1126 09:22:59.084014 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqxv6" event={"ID":"b3b026b5-33ad-4893-915c-9818892d6d99","Type":"ContainerStarted","Data":"59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1"} Nov 26 09:23:00 crc kubenswrapper[4909]: I1126 09:23:00.101242 4909 generic.go:334] "Generic (PLEG): container finished" podID="b3b026b5-33ad-4893-915c-9818892d6d99" containerID="59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1" exitCode=0 Nov 26 09:23:00 crc kubenswrapper[4909]: I1126 09:23:00.101302 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqxv6" event={"ID":"b3b026b5-33ad-4893-915c-9818892d6d99","Type":"ContainerDied","Data":"59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1"} Nov 26 09:23:01 crc kubenswrapper[4909]: I1126 09:23:01.112561 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqxv6" event={"ID":"b3b026b5-33ad-4893-915c-9818892d6d99","Type":"ContainerStarted","Data":"87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7"} Nov 26 09:23:01 crc kubenswrapper[4909]: I1126 09:23:01.137798 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xqxv6" podStartSLOduration=2.526683074 podStartE2EDuration="5.137774728s" podCreationTimestamp="2025-11-26 09:22:56 +0000 UTC" firstStartedPulling="2025-11-26 09:22:58.073561412 +0000 UTC m=+8550.219772578" lastFinishedPulling="2025-11-26 09:23:00.684653036 +0000 UTC m=+8552.830864232" observedRunningTime="2025-11-26 09:23:01.125965515 +0000 UTC m=+8553.272176681" watchObservedRunningTime="2025-11-26 09:23:01.137774728 +0000 UTC m=+8553.283985894" Nov 26 09:23:06 crc kubenswrapper[4909]: I1126 09:23:06.729685 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:23:06 crc kubenswrapper[4909]: I1126 09:23:06.730201 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:23:06 crc kubenswrapper[4909]: I1126 09:23:06.790224 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:23:07 crc kubenswrapper[4909]: I1126 09:23:07.215261 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:23:07 crc kubenswrapper[4909]: I1126 09:23:07.271682 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xqxv6"] Nov 26 09:23:08 crc kubenswrapper[4909]: I1126 09:23:08.508445 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:23:08 crc kubenswrapper[4909]: E1126 09:23:08.508720 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:23:09 crc kubenswrapper[4909]: I1126 09:23:09.189191 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xqxv6" podUID="b3b026b5-33ad-4893-915c-9818892d6d99" containerName="registry-server" containerID="cri-o://87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7" gracePeriod=2 Nov 26 09:23:09 crc kubenswrapper[4909]: I1126 09:23:09.698230 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:23:09 crc kubenswrapper[4909]: I1126 09:23:09.715951 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfxhm\" (UniqueName: \"kubernetes.io/projected/b3b026b5-33ad-4893-915c-9818892d6d99-kube-api-access-cfxhm\") pod \"b3b026b5-33ad-4893-915c-9818892d6d99\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " Nov 26 09:23:09 crc kubenswrapper[4909]: I1126 09:23:09.716022 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-utilities\") pod \"b3b026b5-33ad-4893-915c-9818892d6d99\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " Nov 26 09:23:09 crc kubenswrapper[4909]: I1126 09:23:09.716187 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-catalog-content\") pod \"b3b026b5-33ad-4893-915c-9818892d6d99\" (UID: \"b3b026b5-33ad-4893-915c-9818892d6d99\") " Nov 26 09:23:09 crc kubenswrapper[4909]: I1126 09:23:09.721438 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-utilities" (OuterVolumeSpecName: "utilities") pod "b3b026b5-33ad-4893-915c-9818892d6d99" (UID: "b3b026b5-33ad-4893-915c-9818892d6d99"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:23:09 crc kubenswrapper[4909]: I1126 09:23:09.726181 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b026b5-33ad-4893-915c-9818892d6d99-kube-api-access-cfxhm" (OuterVolumeSpecName: "kube-api-access-cfxhm") pod "b3b026b5-33ad-4893-915c-9818892d6d99" (UID: "b3b026b5-33ad-4893-915c-9818892d6d99"). InnerVolumeSpecName "kube-api-access-cfxhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:23:09 crc kubenswrapper[4909]: I1126 09:23:09.779876 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3b026b5-33ad-4893-915c-9818892d6d99" (UID: "b3b026b5-33ad-4893-915c-9818892d6d99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:23:09 crc kubenswrapper[4909]: I1126 09:23:09.818967 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:23:09 crc kubenswrapper[4909]: I1126 09:23:09.819005 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfxhm\" (UniqueName: \"kubernetes.io/projected/b3b026b5-33ad-4893-915c-9818892d6d99-kube-api-access-cfxhm\") on node \"crc\" DevicePath \"\"" Nov 26 09:23:09 crc kubenswrapper[4909]: I1126 09:23:09.819015 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b026b5-33ad-4893-915c-9818892d6d99-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.203265 4909 generic.go:334] "Generic (PLEG): container finished" podID="b3b026b5-33ad-4893-915c-9818892d6d99" containerID="87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7" exitCode=0 Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.203348 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xqxv6" Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.203339 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqxv6" event={"ID":"b3b026b5-33ad-4893-915c-9818892d6d99","Type":"ContainerDied","Data":"87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7"} Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.203528 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqxv6" event={"ID":"b3b026b5-33ad-4893-915c-9818892d6d99","Type":"ContainerDied","Data":"f075cb68a9490c194609738e9e076aa599e393a3b3fb9222507e704ae2abfc6a"} Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.203560 4909 scope.go:117] "RemoveContainer" containerID="87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7" Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.242159 4909 scope.go:117] "RemoveContainer" containerID="59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1" Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.245100 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xqxv6"] Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.263985 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xqxv6"] Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.273844 4909 scope.go:117] "RemoveContainer" containerID="41de20f979622985944f7105e144b6ad6ce92a18b6cb271f962e61ae341a4cf9" Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.316515 4909 scope.go:117] "RemoveContainer" containerID="87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7" Nov 26 09:23:10 crc kubenswrapper[4909]: E1126 09:23:10.317034 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7\": container with ID starting with 87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7 not found: ID does not exist" containerID="87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7" Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.317068 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7"} err="failed to get container status \"87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7\": rpc error: code = NotFound desc = could not find container \"87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7\": container with ID starting with 87d989ccb21e1c2d8cfba597f49a0b91cd52d5e6498e1539f0015e5de69e30f7 not found: ID does not exist" Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.317090 4909 scope.go:117] "RemoveContainer" containerID="59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1" Nov 26 09:23:10 crc kubenswrapper[4909]: E1126 09:23:10.317346 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1\": container with ID starting with 59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1 not found: ID does not exist" containerID="59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1" Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.317371 4909 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1"} err="failed to get container status \"59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1\": rpc error: code = NotFound desc = could not find container \"59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1\": container with ID starting with 59d37c2255d48557e57c91ba480be9c398f161edf5591bbc3ac708ecb7d339f1 not found: ID does not exist" Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.317386 4909 scope.go:117] "RemoveContainer" containerID="41de20f979622985944f7105e144b6ad6ce92a18b6cb271f962e61ae341a4cf9" Nov 26 09:23:10 crc kubenswrapper[4909]: E1126 09:23:10.317639 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41de20f979622985944f7105e144b6ad6ce92a18b6cb271f962e61ae341a4cf9\": container with ID starting with 41de20f979622985944f7105e144b6ad6ce92a18b6cb271f962e61ae341a4cf9 not found: ID does not exist" containerID="41de20f979622985944f7105e144b6ad6ce92a18b6cb271f962e61ae341a4cf9" Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.317663 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41de20f979622985944f7105e144b6ad6ce92a18b6cb271f962e61ae341a4cf9"} err="failed to get container status \"41de20f979622985944f7105e144b6ad6ce92a18b6cb271f962e61ae341a4cf9\": rpc error: code = NotFound desc = could not find container \"41de20f979622985944f7105e144b6ad6ce92a18b6cb271f962e61ae341a4cf9\": container with ID starting with 41de20f979622985944f7105e144b6ad6ce92a18b6cb271f962e61ae341a4cf9 not found: ID does not exist" Nov 26 09:23:10 crc kubenswrapper[4909]: I1126 09:23:10.512405 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b026b5-33ad-4893-915c-9818892d6d99" path="/var/lib/kubelet/pods/b3b026b5-33ad-4893-915c-9818892d6d99/volumes" Nov 26 09:23:22 crc kubenswrapper[4909]: I1126 09:23:22.499585 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:23:22 crc kubenswrapper[4909]: E1126 09:23:22.501000 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:23:34 crc kubenswrapper[4909]: I1126 09:23:34.499163 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:23:34 crc kubenswrapper[4909]: E1126 09:23:34.499960 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:23:47 crc kubenswrapper[4909]: I1126 09:23:47.499055 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:23:47 crc 
kubenswrapper[4909]: E1126 09:23:47.499909 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:24:01 crc kubenswrapper[4909]: I1126 09:24:01.502433 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:24:01 crc kubenswrapper[4909]: E1126 09:24:01.503394 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:24:14 crc kubenswrapper[4909]: I1126 09:24:14.499088 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:24:14 crc kubenswrapper[4909]: E1126 09:24:14.499859 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:24:26 crc kubenswrapper[4909]: I1126 09:24:26.499157 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:24:26 crc kubenswrapper[4909]: E1126 09:24:26.499852 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.375915 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sm2dl"] Nov 26 09:24:30 crc kubenswrapper[4909]: E1126 09:24:30.377134 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b026b5-33ad-4893-915c-9818892d6d99" containerName="extract-content" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.377155 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b026b5-33ad-4893-915c-9818892d6d99" containerName="extract-content" Nov 26 09:24:30 crc kubenswrapper[4909]: E1126 09:24:30.377177 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b026b5-33ad-4893-915c-9818892d6d99" containerName="extract-utilities" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.377185 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b026b5-33ad-4893-915c-9818892d6d99" containerName="extract-utilities" Nov 26 09:24:30 crc kubenswrapper[4909]: E1126 09:24:30.377221 4909 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b3b026b5-33ad-4893-915c-9818892d6d99" containerName="registry-server" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.377229 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b026b5-33ad-4893-915c-9818892d6d99" containerName="registry-server" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.377503 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b026b5-33ad-4893-915c-9818892d6d99" containerName="registry-server" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.379489 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.389207 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sm2dl"] Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.418447 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-catalog-content\") pod \"redhat-operators-sm2dl\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.418660 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-utilities\") pod \"redhat-operators-sm2dl\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.418791 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsjk5\" (UniqueName: \"kubernetes.io/projected/387b9541-ea3f-443e-9af1-8749691a25d0-kube-api-access-nsjk5\") pod \"redhat-operators-sm2dl\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.521180 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-utilities\") pod \"redhat-operators-sm2dl\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.521341 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsjk5\" (UniqueName: \"kubernetes.io/projected/387b9541-ea3f-443e-9af1-8749691a25d0-kube-api-access-nsjk5\") pod \"redhat-operators-sm2dl\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.521586 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-catalog-content\") pod \"redhat-operators-sm2dl\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.522220 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-catalog-content\") pod 
\"redhat-operators-sm2dl\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.523522 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-utilities\") pod \"redhat-operators-sm2dl\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.542946 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsjk5\" (UniqueName: \"kubernetes.io/projected/387b9541-ea3f-443e-9af1-8749691a25d0-kube-api-access-nsjk5\") pod \"redhat-operators-sm2dl\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:30 crc kubenswrapper[4909]: I1126 09:24:30.708554 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:31 crc kubenswrapper[4909]: I1126 09:24:31.295180 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sm2dl"] Nov 26 09:24:32 crc kubenswrapper[4909]: I1126 09:24:32.123004 4909 generic.go:334] "Generic (PLEG): container finished" podID="387b9541-ea3f-443e-9af1-8749691a25d0" containerID="ce1111b37a118ad181345b9ebd38aaf6452f9f1c51bbaacaa222020a7f7f2930" exitCode=0 Nov 26 09:24:32 crc kubenswrapper[4909]: I1126 09:24:32.123258 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm2dl" event={"ID":"387b9541-ea3f-443e-9af1-8749691a25d0","Type":"ContainerDied","Data":"ce1111b37a118ad181345b9ebd38aaf6452f9f1c51bbaacaa222020a7f7f2930"} Nov 26 09:24:32 crc kubenswrapper[4909]: I1126 09:24:32.123289 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm2dl" event={"ID":"387b9541-ea3f-443e-9af1-8749691a25d0","Type":"ContainerStarted","Data":"039c36db87435d4280f1f843f0d9fdbd2ac315d0c58320963d6bad3d576a22af"} Nov 26 09:24:32 crc kubenswrapper[4909]: I1126 09:24:32.126577 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 09:24:34 crc kubenswrapper[4909]: I1126 09:24:34.147991 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm2dl" event={"ID":"387b9541-ea3f-443e-9af1-8749691a25d0","Type":"ContainerStarted","Data":"fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa"} Nov 26 09:24:36 crc kubenswrapper[4909]: I1126 09:24:36.173741 4909 generic.go:334] "Generic (PLEG): container finished" podID="387b9541-ea3f-443e-9af1-8749691a25d0" containerID="fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa" exitCode=0 Nov 26 09:24:36 crc kubenswrapper[4909]: I1126 09:24:36.173785 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm2dl" event={"ID":"387b9541-ea3f-443e-9af1-8749691a25d0","Type":"ContainerDied","Data":"fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa"} Nov 26 09:24:37 crc kubenswrapper[4909]: I1126 09:24:37.188513 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm2dl" event={"ID":"387b9541-ea3f-443e-9af1-8749691a25d0","Type":"ContainerStarted","Data":"034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d"} Nov 26 09:24:37 crc 
kubenswrapper[4909]: I1126 09:24:37.223058 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sm2dl" podStartSLOduration=2.73754162 podStartE2EDuration="7.223039754s" podCreationTimestamp="2025-11-26 09:24:30 +0000 UTC" firstStartedPulling="2025-11-26 09:24:32.12629829 +0000 UTC m=+8644.272509456" lastFinishedPulling="2025-11-26 09:24:36.611796394 +0000 UTC m=+8648.758007590" observedRunningTime="2025-11-26 09:24:37.218067447 +0000 UTC m=+8649.364278613" watchObservedRunningTime="2025-11-26 09:24:37.223039754 +0000 UTC m=+8649.369250920" Nov 26 09:24:37 crc kubenswrapper[4909]: I1126 09:24:37.499472 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:24:37 crc kubenswrapper[4909]: E1126 09:24:37.499747 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:24:40 crc kubenswrapper[4909]: I1126 09:24:40.228654 4909 generic.go:334] "Generic (PLEG): container finished" podID="57f07bac-a5ba-488c-91f2-e925ad366f26" containerID="b9d65c3b17c626023715c35d63aeac1f5e5d0b3b803f18e05bd1a1905ca33d6a" exitCode=0 Nov 26 09:24:40 crc kubenswrapper[4909]: I1126 09:24:40.228766 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5" event={"ID":"57f07bac-a5ba-488c-91f2-e925ad366f26","Type":"ContainerDied","Data":"b9d65c3b17c626023715c35d63aeac1f5e5d0b3b803f18e05bd1a1905ca33d6a"} Nov 26 09:24:40 crc kubenswrapper[4909]: I1126 09:24:40.709118 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:40 crc kubenswrapper[4909]: I1126 09:24:40.709191 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.758194 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sm2dl" podUID="387b9541-ea3f-443e-9af1-8749691a25d0" containerName="registry-server" probeResult="failure" output=< Nov 26 09:24:41 crc kubenswrapper[4909]: timeout: failed to connect service ":50051" within 1s Nov 26 09:24:41 crc kubenswrapper[4909]: > Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.847729 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5" Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.902198 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-agent-neutron-config-0\") pod \"57f07bac-a5ba-488c-91f2-e925ad366f26\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.902324 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-inventory\") pod \"57f07bac-a5ba-488c-91f2-e925ad366f26\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.902457 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-combined-ca-bundle\") pod \"57f07bac-a5ba-488c-91f2-e925ad366f26\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.902498 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd75q\" (UniqueName: \"kubernetes.io/projected/57f07bac-a5ba-488c-91f2-e925ad366f26-kube-api-access-nd75q\") pod \"57f07bac-a5ba-488c-91f2-e925ad366f26\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.902519 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ssh-key\") pod \"57f07bac-a5ba-488c-91f2-e925ad366f26\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.902576 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ceph\") pod \"57f07bac-a5ba-488c-91f2-e925ad366f26\" (UID: \"57f07bac-a5ba-488c-91f2-e925ad366f26\") " Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.910569 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f07bac-a5ba-488c-91f2-e925ad366f26-kube-api-access-nd75q" (OuterVolumeSpecName: "kube-api-access-nd75q") pod "57f07bac-a5ba-488c-91f2-e925ad366f26" (UID: "57f07bac-a5ba-488c-91f2-e925ad366f26"). InnerVolumeSpecName "kube-api-access-nd75q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.913717 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "57f07bac-a5ba-488c-91f2-e925ad366f26" (UID: "57f07bac-a5ba-488c-91f2-e925ad366f26"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.917076 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ceph" (OuterVolumeSpecName: "ceph") pod "57f07bac-a5ba-488c-91f2-e925ad366f26" (UID: "57f07bac-a5ba-488c-91f2-e925ad366f26"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.942187 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "57f07bac-a5ba-488c-91f2-e925ad366f26" (UID: "57f07bac-a5ba-488c-91f2-e925ad366f26"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.946871 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "57f07bac-a5ba-488c-91f2-e925ad366f26" (UID: "57f07bac-a5ba-488c-91f2-e925ad366f26"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:24:41 crc kubenswrapper[4909]: I1126 09:24:41.951574 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-inventory" (OuterVolumeSpecName: "inventory") pod "57f07bac-a5ba-488c-91f2-e925ad366f26" (UID: "57f07bac-a5ba-488c-91f2-e925ad366f26"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.006640 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd75q\" (UniqueName: \"kubernetes.io/projected/57f07bac-a5ba-488c-91f2-e925ad366f26-kube-api-access-nd75q\") on node \"crc\" DevicePath \"\"" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.006707 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.006723 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.006762 4909 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.006782 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.006804 4909 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f07bac-a5ba-488c-91f2-e925ad366f26-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.252275 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5" event={"ID":"57f07bac-a5ba-488c-91f2-e925ad366f26","Type":"ContainerDied","Data":"19648a1c5083c73f070e02b89e95abfa5b11f8376d7a4bf3411372a6eaf40c88"} Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.252322 4909 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="19648a1c5083c73f070e02b89e95abfa5b11f8376d7a4bf3411372a6eaf40c88" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.252325 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-2jjv5" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.391638 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-flb4r"] Nov 26 09:24:42 crc kubenswrapper[4909]: E1126 09:24:42.392110 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f07bac-a5ba-488c-91f2-e925ad366f26" containerName="neutron-sriov-openstack-openstack-cell1" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.392128 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f07bac-a5ba-488c-91f2-e925ad366f26" containerName="neutron-sriov-openstack-openstack-cell1" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.392392 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f07bac-a5ba-488c-91f2-e925ad366f26" containerName="neutron-sriov-openstack-openstack-cell1" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.393196 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.397855 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.397910 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.397913 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.397854 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.397863 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.410658 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-flb4r"] Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.518186 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.518250 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.518269 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.518288 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.518364 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.518469 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhngc\" (UniqueName: \"kubernetes.io/projected/01fc94ad-49dd-4014-9145-beddf1a52403-kube-api-access-nhngc\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.621683 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhngc\" (UniqueName: \"kubernetes.io/projected/01fc94ad-49dd-4014-9145-beddf1a52403-kube-api-access-nhngc\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.621994 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.622256 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.622320 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.622350 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-agent-neutron-config-0\") 
pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.622491 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.629584 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.629636 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.630046 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.635154 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.637297 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.640611 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhngc\" (UniqueName: \"kubernetes.io/projected/01fc94ad-49dd-4014-9145-beddf1a52403-kube-api-access-nhngc\") pod \"neutron-dhcp-openstack-openstack-cell1-flb4r\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:42 crc kubenswrapper[4909]: I1126 09:24:42.715809 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:24:43 crc kubenswrapper[4909]: I1126 09:24:43.304408 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-flb4r"] Nov 26 09:24:44 crc kubenswrapper[4909]: I1126 09:24:44.275236 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" event={"ID":"01fc94ad-49dd-4014-9145-beddf1a52403","Type":"ContainerStarted","Data":"c319719ef49aa100c558f721a4806dc4fcb72d8126e01163b2ab950b3e7e2869"} Nov 26 09:24:45 crc kubenswrapper[4909]: I1126 09:24:45.290623 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" event={"ID":"01fc94ad-49dd-4014-9145-beddf1a52403","Type":"ContainerStarted","Data":"ea208f396dc204e9bc5ca7908f1263627c5a79a55fdef12c3577f005bd47ce03"} Nov 26 09:24:45 crc kubenswrapper[4909]: I1126 09:24:45.317318 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" podStartSLOduration=2.525116115 podStartE2EDuration="3.317291527s" podCreationTimestamp="2025-11-26 09:24:42 +0000 UTC" firstStartedPulling="2025-11-26 09:24:43.310887933 +0000 UTC m=+8655.457099099" lastFinishedPulling="2025-11-26 09:24:44.103063345 +0000 UTC m=+8656.249274511" observedRunningTime="2025-11-26 09:24:45.315353364 +0000 UTC m=+8657.461564540" watchObservedRunningTime="2025-11-26 09:24:45.317291527 +0000 UTC m=+8657.463502713" Nov 26 09:24:50 crc kubenswrapper[4909]: I1126 09:24:50.773965 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:50 crc kubenswrapper[4909]: I1126 09:24:50.831535 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:51 crc kubenswrapper[4909]: I1126 09:24:51.011719 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sm2dl"] Nov 26 09:24:52 crc kubenswrapper[4909]: I1126 09:24:52.397971 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sm2dl" podUID="387b9541-ea3f-443e-9af1-8749691a25d0" containerName="registry-server" containerID="cri-o://034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d" gracePeriod=2 Nov 26 09:24:52 crc kubenswrapper[4909]: I1126 09:24:52.499649 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:24:52 crc kubenswrapper[4909]: E1126 09:24:52.499992 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:24:52 crc kubenswrapper[4909]: I1126 09:24:52.884938 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:52 crc kubenswrapper[4909]: I1126 09:24:52.999947 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-catalog-content\") pod \"387b9541-ea3f-443e-9af1-8749691a25d0\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.000015 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-utilities\") pod \"387b9541-ea3f-443e-9af1-8749691a25d0\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.000193 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsjk5\" (UniqueName: \"kubernetes.io/projected/387b9541-ea3f-443e-9af1-8749691a25d0-kube-api-access-nsjk5\") pod \"387b9541-ea3f-443e-9af1-8749691a25d0\" (UID: \"387b9541-ea3f-443e-9af1-8749691a25d0\") " Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.000728 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-utilities" (OuterVolumeSpecName: "utilities") pod "387b9541-ea3f-443e-9af1-8749691a25d0" (UID: "387b9541-ea3f-443e-9af1-8749691a25d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.006095 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387b9541-ea3f-443e-9af1-8749691a25d0-kube-api-access-nsjk5" (OuterVolumeSpecName: "kube-api-access-nsjk5") pod "387b9541-ea3f-443e-9af1-8749691a25d0" (UID: "387b9541-ea3f-443e-9af1-8749691a25d0"). InnerVolumeSpecName "kube-api-access-nsjk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.102876 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "387b9541-ea3f-443e-9af1-8749691a25d0" (UID: "387b9541-ea3f-443e-9af1-8749691a25d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.103162 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsjk5\" (UniqueName: \"kubernetes.io/projected/387b9541-ea3f-443e-9af1-8749691a25d0-kube-api-access-nsjk5\") on node \"crc\" DevicePath \"\"" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.103184 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.103194 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387b9541-ea3f-443e-9af1-8749691a25d0-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.407987 4909 generic.go:334] "Generic (PLEG): container finished" podID="387b9541-ea3f-443e-9af1-8749691a25d0" containerID="034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d" exitCode=0 Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.408029 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm2dl" event={"ID":"387b9541-ea3f-443e-9af1-8749691a25d0","Type":"ContainerDied","Data":"034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d"} Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.408052 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm2dl" event={"ID":"387b9541-ea3f-443e-9af1-8749691a25d0","Type":"ContainerDied","Data":"039c36db87435d4280f1f843f0d9fdbd2ac315d0c58320963d6bad3d576a22af"} Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.408056 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sm2dl" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.408067 4909 scope.go:117] "RemoveContainer" containerID="034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.435857 4909 scope.go:117] "RemoveContainer" containerID="fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.445573 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sm2dl"] Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.455352 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sm2dl"] Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.458092 4909 scope.go:117] "RemoveContainer" containerID="ce1111b37a118ad181345b9ebd38aaf6452f9f1c51bbaacaa222020a7f7f2930" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.501388 4909 scope.go:117] "RemoveContainer" containerID="034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d" Nov 26 09:24:53 crc kubenswrapper[4909]: E1126 09:24:53.502173 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d\": container with ID starting with 034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d not found: ID does not exist" containerID="034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.502214 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d"} err="failed to get container status \"034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d\": rpc error: code = NotFound desc = could not find container \"034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d\": container with ID starting with 034ececa7960c31a96d758c7a2ffe5fbbb2913ceb3d95b6ae5107ddc41291a3d not found: ID does not exist" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.502237 4909 scope.go:117] "RemoveContainer" containerID="fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa" Nov 26 09:24:53 crc kubenswrapper[4909]: E1126 09:24:53.502737 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa\": container with ID starting with fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa not found: ID does not exist" containerID="fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.502765 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa"} err="failed to get container status \"fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa\": rpc error: code = NotFound desc = could not find container \"fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa\": container with ID starting with fb190b3db6f5972a03041d4579afa7320f72ed217682a4730ba7190896bb53aa not found: ID does not exist" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.502780 4909 scope.go:117] "RemoveContainer" 
containerID="ce1111b37a118ad181345b9ebd38aaf6452f9f1c51bbaacaa222020a7f7f2930" Nov 26 09:24:53 crc kubenswrapper[4909]: E1126 09:24:53.503076 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce1111b37a118ad181345b9ebd38aaf6452f9f1c51bbaacaa222020a7f7f2930\": container with ID starting with ce1111b37a118ad181345b9ebd38aaf6452f9f1c51bbaacaa222020a7f7f2930 not found: ID does not exist" containerID="ce1111b37a118ad181345b9ebd38aaf6452f9f1c51bbaacaa222020a7f7f2930" Nov 26 09:24:53 crc kubenswrapper[4909]: I1126 09:24:53.503108 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce1111b37a118ad181345b9ebd38aaf6452f9f1c51bbaacaa222020a7f7f2930"} err="failed to get container status \"ce1111b37a118ad181345b9ebd38aaf6452f9f1c51bbaacaa222020a7f7f2930\": rpc error: code = NotFound desc = could not find container \"ce1111b37a118ad181345b9ebd38aaf6452f9f1c51bbaacaa222020a7f7f2930\": container with ID starting with ce1111b37a118ad181345b9ebd38aaf6452f9f1c51bbaacaa222020a7f7f2930 not found: ID does not exist" Nov 26 09:24:54 crc kubenswrapper[4909]: I1126 09:24:54.511803 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="387b9541-ea3f-443e-9af1-8749691a25d0" path="/var/lib/kubelet/pods/387b9541-ea3f-443e-9af1-8749691a25d0/volumes" Nov 26 09:25:04 crc kubenswrapper[4909]: I1126 09:25:04.499814 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:25:04 crc kubenswrapper[4909]: E1126 09:25:04.500655 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:25:16 crc kubenswrapper[4909]: I1126 09:25:16.498900 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:25:16 crc kubenswrapper[4909]: E1126 09:25:16.499717 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:25:31 crc kubenswrapper[4909]: I1126 09:25:31.499638 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:25:31 crc kubenswrapper[4909]: E1126 09:25:31.500550 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:25:43 crc kubenswrapper[4909]: I1126 09:25:43.498772 4909 scope.go:117] "RemoveContainer" 
containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:25:43 crc kubenswrapper[4909]: E1126 09:25:43.499488 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:25:57 crc kubenswrapper[4909]: I1126 09:25:57.500774 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:25:57 crc kubenswrapper[4909]: E1126 09:25:57.501708 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:26:09 crc kubenswrapper[4909]: I1126 09:26:09.499720 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:26:10 crc kubenswrapper[4909]: I1126 09:26:10.326521 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"f79c65b8b498e59164657f3aef6f9c00c5eb7f55e85d39244a3da61e2f374d62"} Nov 26 09:28:37 crc kubenswrapper[4909]: I1126 09:28:37.300502 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:28:37 crc kubenswrapper[4909]: I1126 09:28:37.301154 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:29:07 crc kubenswrapper[4909]: I1126 09:29:07.301192 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:29:07 crc kubenswrapper[4909]: I1126 09:29:07.301914 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:29:12 crc kubenswrapper[4909]: I1126 09:29:12.921794 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7ndqz"] Nov 26 09:29:12 crc kubenswrapper[4909]: E1126 09:29:12.922921 4909 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="387b9541-ea3f-443e-9af1-8749691a25d0" containerName="extract-content" Nov 26 09:29:12 crc kubenswrapper[4909]: I1126 09:29:12.922938 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="387b9541-ea3f-443e-9af1-8749691a25d0" containerName="extract-content" Nov 26 09:29:12 crc kubenswrapper[4909]: E1126 09:29:12.923043 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387b9541-ea3f-443e-9af1-8749691a25d0" containerName="registry-server" Nov 26 09:29:12 crc kubenswrapper[4909]: I1126 09:29:12.923053 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="387b9541-ea3f-443e-9af1-8749691a25d0" containerName="registry-server" Nov 26 09:29:12 crc kubenswrapper[4909]: E1126 09:29:12.923121 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387b9541-ea3f-443e-9af1-8749691a25d0" containerName="extract-utilities" Nov 26 09:29:12 crc kubenswrapper[4909]: I1126 09:29:12.923130 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="387b9541-ea3f-443e-9af1-8749691a25d0" containerName="extract-utilities" Nov 26 09:29:12 crc kubenswrapper[4909]: I1126 09:29:12.923434 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="387b9541-ea3f-443e-9af1-8749691a25d0" containerName="registry-server" Nov 26 09:29:12 crc kubenswrapper[4909]: I1126 09:29:12.925611 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:12 crc kubenswrapper[4909]: I1126 09:29:12.962734 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ndqz"] Nov 26 09:29:12 crc kubenswrapper[4909]: I1126 09:29:12.966000 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-utilities\") pod \"redhat-marketplace-7ndqz\" (UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:12 crc kubenswrapper[4909]: I1126 09:29:12.966138 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-catalog-content\") pod \"redhat-marketplace-7ndqz\" (UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:12 crc kubenswrapper[4909]: I1126 09:29:12.966168 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f75mb\" (UniqueName: \"kubernetes.io/projected/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-kube-api-access-f75mb\") pod \"redhat-marketplace-7ndqz\" (UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:13 crc kubenswrapper[4909]: I1126 09:29:13.068692 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-utilities\") pod \"redhat-marketplace-7ndqz\" (UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:13 crc kubenswrapper[4909]: I1126 09:29:13.068838 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-catalog-content\") pod \"redhat-marketplace-7ndqz\" 
(UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:13 crc kubenswrapper[4909]: I1126 09:29:13.068864 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f75mb\" (UniqueName: \"kubernetes.io/projected/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-kube-api-access-f75mb\") pod \"redhat-marketplace-7ndqz\" (UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:13 crc kubenswrapper[4909]: I1126 09:29:13.069307 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-utilities\") pod \"redhat-marketplace-7ndqz\" (UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:13 crc kubenswrapper[4909]: I1126 09:29:13.069349 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-catalog-content\") pod \"redhat-marketplace-7ndqz\" (UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:13 crc kubenswrapper[4909]: I1126 09:29:13.091470 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f75mb\" (UniqueName: \"kubernetes.io/projected/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-kube-api-access-f75mb\") pod \"redhat-marketplace-7ndqz\" (UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:13 crc kubenswrapper[4909]: I1126 09:29:13.262788 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:13 crc kubenswrapper[4909]: I1126 09:29:13.724289 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ndqz"] Nov 26 09:29:14 crc kubenswrapper[4909]: I1126 09:29:14.456793 4909 generic.go:334] "Generic (PLEG): container finished" podID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" containerID="3bec96cd5b60b13bd6e10e1f926eab51710a85fbe32c5c622ee742585c39d538" exitCode=0 Nov 26 09:29:14 crc kubenswrapper[4909]: I1126 09:29:14.456846 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ndqz" event={"ID":"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4","Type":"ContainerDied","Data":"3bec96cd5b60b13bd6e10e1f926eab51710a85fbe32c5c622ee742585c39d538"} Nov 26 09:29:14 crc kubenswrapper[4909]: I1126 09:29:14.457270 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ndqz" event={"ID":"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4","Type":"ContainerStarted","Data":"9e6b6561910e6802a9dc49b36a07bb720c3f8410d371dbf86699c7935a6e8258"} Nov 26 09:29:17 crc kubenswrapper[4909]: I1126 09:29:17.491036 4909 generic.go:334] "Generic (PLEG): container finished" podID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" containerID="39346f8f321f8810030b59bc681da62279f68efcd7338db802f6b6644e7c0d7e" exitCode=0 Nov 26 09:29:17 crc kubenswrapper[4909]: I1126 09:29:17.491111 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ndqz" event={"ID":"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4","Type":"ContainerDied","Data":"39346f8f321f8810030b59bc681da62279f68efcd7338db802f6b6644e7c0d7e"} Nov 26 09:29:18 crc kubenswrapper[4909]: I1126 09:29:18.509936 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ndqz" event={"ID":"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4","Type":"ContainerStarted","Data":"6c1e435a5cbe113f733316e1f16aef9a208340fc4dd3d9e3c5c21ba3603c8b40"} Nov 26 09:29:18 crc kubenswrapper[4909]: I1126 09:29:18.524471 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7ndqz" podStartSLOduration=2.917423818 podStartE2EDuration="6.524452299s" podCreationTimestamp="2025-11-26 09:29:12 +0000 UTC" firstStartedPulling="2025-11-26 09:29:14.45822856 +0000 UTC m=+8926.604439726" lastFinishedPulling="2025-11-26 09:29:18.065257041 +0000 UTC m=+8930.211468207" observedRunningTime="2025-11-26 09:29:18.523688078 +0000 UTC m=+8930.669899284" watchObservedRunningTime="2025-11-26 09:29:18.524452299 +0000 UTC m=+8930.670663465" Nov 26 09:29:23 crc kubenswrapper[4909]: I1126 09:29:23.264754 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:23 crc kubenswrapper[4909]: I1126 09:29:23.265766 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:23 crc kubenswrapper[4909]: I1126 09:29:23.337628 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:23 crc kubenswrapper[4909]: I1126 09:29:23.606752 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:23 crc kubenswrapper[4909]: I1126 09:29:23.666765 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-7ndqz"] Nov 26 09:29:25 crc kubenswrapper[4909]: I1126 09:29:25.588273 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7ndqz" podUID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" containerName="registry-server" containerID="cri-o://6c1e435a5cbe113f733316e1f16aef9a208340fc4dd3d9e3c5c21ba3603c8b40" gracePeriod=2 Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.604387 4909 generic.go:334] "Generic (PLEG): container finished" podID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" containerID="6c1e435a5cbe113f733316e1f16aef9a208340fc4dd3d9e3c5c21ba3603c8b40" exitCode=0 Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.604489 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ndqz" event={"ID":"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4","Type":"ContainerDied","Data":"6c1e435a5cbe113f733316e1f16aef9a208340fc4dd3d9e3c5c21ba3603c8b40"} Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.859326 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.892187 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-utilities\") pod \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\" (UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.892310 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f75mb\" (UniqueName: \"kubernetes.io/projected/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-kube-api-access-f75mb\") pod \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\" (UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.892393 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-catalog-content\") pod \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\" (UID: \"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4\") " Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.894301 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-utilities" (OuterVolumeSpecName: "utilities") pod "7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" (UID: "7f4ac4ba-3c71-43d1-92bf-fefff037b7e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.920242 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" (UID: "7f4ac4ba-3c71-43d1-92bf-fefff037b7e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.935187 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-kube-api-access-f75mb" (OuterVolumeSpecName: "kube-api-access-f75mb") pod "7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" (UID: "7f4ac4ba-3c71-43d1-92bf-fefff037b7e4"). InnerVolumeSpecName "kube-api-access-f75mb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.994915 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f75mb\" (UniqueName: \"kubernetes.io/projected/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-kube-api-access-f75mb\") on node \"crc\" DevicePath \"\"" Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.994947 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:29:26 crc kubenswrapper[4909]: I1126 09:29:26.994956 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:29:27 crc kubenswrapper[4909]: I1126 09:29:27.621057 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ndqz" event={"ID":"7f4ac4ba-3c71-43d1-92bf-fefff037b7e4","Type":"ContainerDied","Data":"9e6b6561910e6802a9dc49b36a07bb720c3f8410d371dbf86699c7935a6e8258"} Nov 26 09:29:27 crc kubenswrapper[4909]: I1126 09:29:27.621185 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ndqz" Nov 26 09:29:27 crc kubenswrapper[4909]: I1126 09:29:27.621412 4909 scope.go:117] "RemoveContainer" containerID="6c1e435a5cbe113f733316e1f16aef9a208340fc4dd3d9e3c5c21ba3603c8b40" Nov 26 09:29:27 crc kubenswrapper[4909]: I1126 09:29:27.666136 4909 scope.go:117] "RemoveContainer" containerID="39346f8f321f8810030b59bc681da62279f68efcd7338db802f6b6644e7c0d7e" Nov 26 09:29:27 crc kubenswrapper[4909]: I1126 09:29:27.710034 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ndqz"] Nov 26 09:29:27 crc kubenswrapper[4909]: I1126 09:29:27.720964 4909 scope.go:117] "RemoveContainer" containerID="3bec96cd5b60b13bd6e10e1f926eab51710a85fbe32c5c622ee742585c39d538" Nov 26 09:29:27 crc kubenswrapper[4909]: I1126 09:29:27.725045 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ndqz"] Nov 26 09:29:28 crc kubenswrapper[4909]: I1126 09:29:28.518358 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" path="/var/lib/kubelet/pods/7f4ac4ba-3c71-43d1-92bf-fefff037b7e4/volumes" Nov 26 09:29:37 crc kubenswrapper[4909]: I1126 09:29:37.301374 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:29:37 crc kubenswrapper[4909]: I1126 09:29:37.301995 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:29:37 crc kubenswrapper[4909]: I1126 09:29:37.302047 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 09:29:37 crc kubenswrapper[4909]: I1126 09:29:37.302939 4909 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f79c65b8b498e59164657f3aef6f9c00c5eb7f55e85d39244a3da61e2f374d62"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 09:29:37 crc kubenswrapper[4909]: I1126 09:29:37.302999 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://f79c65b8b498e59164657f3aef6f9c00c5eb7f55e85d39244a3da61e2f374d62" gracePeriod=600 Nov 26 09:29:37 crc kubenswrapper[4909]: I1126 09:29:37.756561 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="f79c65b8b498e59164657f3aef6f9c00c5eb7f55e85d39244a3da61e2f374d62" exitCode=0 Nov 26 09:29:37 crc kubenswrapper[4909]: I1126 09:29:37.756623 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"f79c65b8b498e59164657f3aef6f9c00c5eb7f55e85d39244a3da61e2f374d62"} Nov 26 09:29:37 crc kubenswrapper[4909]: I1126 09:29:37.756879 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"} Nov 26 09:29:37 crc kubenswrapper[4909]: I1126 09:29:37.756902 4909 scope.go:117] "RemoveContainer" containerID="2c8dfdf78f354b32ffe9a457b18c96a6dac9df3119fa873bc3f78523e6c7cc3d" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.178581 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh"] Nov 26 09:30:00 crc kubenswrapper[4909]: E1126 09:30:00.179561 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" containerName="registry-server" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.179572 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" containerName="registry-server" Nov 26 09:30:00 crc kubenswrapper[4909]: E1126 09:30:00.179622 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" containerName="extract-utilities" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.179628 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" containerName="extract-utilities" Nov 26 09:30:00 crc kubenswrapper[4909]: E1126 09:30:00.179665 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" containerName="extract-content" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.179672 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" containerName="extract-content" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.179897 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4ac4ba-3c71-43d1-92bf-fefff037b7e4" containerName="registry-server" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.180673 4909 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.183275 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.183481 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.217831 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh"] Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.299000 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-secret-volume\") pod \"collect-profiles-29402490-6wljh\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.299088 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-config-volume\") pod \"collect-profiles-29402490-6wljh\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.299833 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrscf\" (UniqueName: \"kubernetes.io/projected/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-kube-api-access-lrscf\") pod \"collect-profiles-29402490-6wljh\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.401470 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-secret-volume\") pod \"collect-profiles-29402490-6wljh\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.402487 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-config-volume\") pod \"collect-profiles-29402490-6wljh\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.402863 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrscf\" (UniqueName: \"kubernetes.io/projected/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-kube-api-access-lrscf\") pod \"collect-profiles-29402490-6wljh\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.403272 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-config-volume\") pod \"collect-profiles-29402490-6wljh\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.407243 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-secret-volume\") pod \"collect-profiles-29402490-6wljh\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.422253 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrscf\" (UniqueName: \"kubernetes.io/projected/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-kube-api-access-lrscf\") pod \"collect-profiles-29402490-6wljh\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:00 crc kubenswrapper[4909]: I1126 09:30:00.518073 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:01 crc kubenswrapper[4909]: I1126 09:30:01.014237 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh"] Nov 26 09:30:01 crc kubenswrapper[4909]: I1126 09:30:01.054414 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" event={"ID":"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca","Type":"ContainerStarted","Data":"be24ffa57696ecd0f6d26132c39099899de3df90c246e6ecb32c41b07031b289"} Nov 26 09:30:02 crc kubenswrapper[4909]: I1126 09:30:02.068413 4909 generic.go:334] "Generic (PLEG): container finished" podID="9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca" containerID="021337c93df7d007eb2150d440237412c68ac9f64852791488f44c3747695162" exitCode=0 Nov 26 09:30:02 crc kubenswrapper[4909]: I1126 09:30:02.068475 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" event={"ID":"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca","Type":"ContainerDied","Data":"021337c93df7d007eb2150d440237412c68ac9f64852791488f44c3747695162"} Nov 26 09:30:03 crc kubenswrapper[4909]: I1126 09:30:03.519699 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:03 crc kubenswrapper[4909]: I1126 09:30:03.672774 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-config-volume\") pod \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " Nov 26 09:30:03 crc kubenswrapper[4909]: I1126 09:30:03.672861 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrscf\" (UniqueName: \"kubernetes.io/projected/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-kube-api-access-lrscf\") pod \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " Nov 26 09:30:03 crc kubenswrapper[4909]: I1126 09:30:03.672926 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-secret-volume\") pod \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\" (UID: \"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca\") " Nov 26 09:30:03 crc kubenswrapper[4909]: I1126 09:30:03.673929 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-config-volume" (OuterVolumeSpecName: "config-volume") pod "9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca" (UID: "9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 09:30:03 crc kubenswrapper[4909]: I1126 09:30:03.682840 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-kube-api-access-lrscf" (OuterVolumeSpecName: "kube-api-access-lrscf") pod "9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca" (UID: "9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca"). InnerVolumeSpecName "kube-api-access-lrscf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:30:03 crc kubenswrapper[4909]: I1126 09:30:03.691782 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca" (UID: "9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:30:03 crc kubenswrapper[4909]: I1126 09:30:03.776839 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 09:30:03 crc kubenswrapper[4909]: I1126 09:30:03.776917 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrscf\" (UniqueName: \"kubernetes.io/projected/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-kube-api-access-lrscf\") on node \"crc\" DevicePath \"\"" Nov 26 09:30:03 crc kubenswrapper[4909]: I1126 09:30:03.776939 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 09:30:04 crc kubenswrapper[4909]: I1126 09:30:04.097589 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" event={"ID":"9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca","Type":"ContainerDied","Data":"be24ffa57696ecd0f6d26132c39099899de3df90c246e6ecb32c41b07031b289"} Nov 26 09:30:04 crc kubenswrapper[4909]: I1126 09:30:04.097658 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be24ffa57696ecd0f6d26132c39099899de3df90c246e6ecb32c41b07031b289" Nov 26 09:30:04 crc kubenswrapper[4909]: I1126 09:30:04.097689 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402490-6wljh" Nov 26 09:30:04 crc kubenswrapper[4909]: I1126 09:30:04.614789 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l"] Nov 26 09:30:04 crc kubenswrapper[4909]: I1126 09:30:04.625705 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402445-d498l"] Nov 26 09:30:06 crc kubenswrapper[4909]: I1126 09:30:06.522172 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a8bc06-8773-4a26-a767-7f4dbc4a6643" path="/var/lib/kubelet/pods/76a8bc06-8773-4a26-a767-7f4dbc4a6643/volumes" Nov 26 09:30:47 crc kubenswrapper[4909]: I1126 09:30:47.413039 4909 scope.go:117] "RemoveContainer" containerID="45de15107549009e775b1325ad9fe7d5522563e2ceb663804c45d9eeec53674b" Nov 26 09:30:52 crc kubenswrapper[4909]: I1126 09:30:52.653955 4909 generic.go:334] "Generic (PLEG): container finished" podID="01fc94ad-49dd-4014-9145-beddf1a52403" containerID="ea208f396dc204e9bc5ca7908f1263627c5a79a55fdef12c3577f005bd47ce03" exitCode=0 Nov 26 09:30:52 crc kubenswrapper[4909]: I1126 09:30:52.654097 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" event={"ID":"01fc94ad-49dd-4014-9145-beddf1a52403","Type":"ContainerDied","Data":"ea208f396dc204e9bc5ca7908f1263627c5a79a55fdef12c3577f005bd47ce03"} Nov 26 09:30:54 crc kubenswrapper[4909]: I1126 09:30:54.680724 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" event={"ID":"01fc94ad-49dd-4014-9145-beddf1a52403","Type":"ContainerDied","Data":"c319719ef49aa100c558f721a4806dc4fcb72d8126e01163b2ab950b3e7e2869"} Nov 26 09:30:54 crc kubenswrapper[4909]: I1126 09:30:54.681265 4909 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c319719ef49aa100c558f721a4806dc4fcb72d8126e01163b2ab950b3e7e2869" Nov 26 09:30:54 crc kubenswrapper[4909]: I1126 09:30:54.819359 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.010500 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-combined-ca-bundle\") pod \"01fc94ad-49dd-4014-9145-beddf1a52403\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.010665 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ssh-key\") pod \"01fc94ad-49dd-4014-9145-beddf1a52403\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.010773 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ceph\") pod \"01fc94ad-49dd-4014-9145-beddf1a52403\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.011473 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-agent-neutron-config-0\") pod \"01fc94ad-49dd-4014-9145-beddf1a52403\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.011851 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhngc\" (UniqueName: \"kubernetes.io/projected/01fc94ad-49dd-4014-9145-beddf1a52403-kube-api-access-nhngc\") pod \"01fc94ad-49dd-4014-9145-beddf1a52403\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.011972 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-inventory\") pod \"01fc94ad-49dd-4014-9145-beddf1a52403\" (UID: \"01fc94ad-49dd-4014-9145-beddf1a52403\") " Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.016798 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01fc94ad-49dd-4014-9145-beddf1a52403-kube-api-access-nhngc" (OuterVolumeSpecName: "kube-api-access-nhngc") pod "01fc94ad-49dd-4014-9145-beddf1a52403" (UID: "01fc94ad-49dd-4014-9145-beddf1a52403"). InnerVolumeSpecName "kube-api-access-nhngc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.017501 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "01fc94ad-49dd-4014-9145-beddf1a52403" (UID: "01fc94ad-49dd-4014-9145-beddf1a52403"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.028869 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ceph" (OuterVolumeSpecName: "ceph") pod "01fc94ad-49dd-4014-9145-beddf1a52403" (UID: "01fc94ad-49dd-4014-9145-beddf1a52403"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.041551 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "01fc94ad-49dd-4014-9145-beddf1a52403" (UID: "01fc94ad-49dd-4014-9145-beddf1a52403"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.042378 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-inventory" (OuterVolumeSpecName: "inventory") pod "01fc94ad-49dd-4014-9145-beddf1a52403" (UID: "01fc94ad-49dd-4014-9145-beddf1a52403"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.059431 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "01fc94ad-49dd-4014-9145-beddf1a52403" (UID: "01fc94ad-49dd-4014-9145-beddf1a52403"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.115644 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.115719 4909 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.115749 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.115772 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-ceph\") on node \"crc\" DevicePath \"\"" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.115799 4909 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01fc94ad-49dd-4014-9145-beddf1a52403-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.115827 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhngc\" (UniqueName: \"kubernetes.io/projected/01fc94ad-49dd-4014-9145-beddf1a52403-kube-api-access-nhngc\") on node \"crc\" DevicePath \"\"" Nov 26 09:30:55 crc kubenswrapper[4909]: I1126 09:30:55.688633 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-flb4r" Nov 26 09:31:21 crc kubenswrapper[4909]: I1126 09:31:21.404024 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 09:31:21 crc kubenswrapper[4909]: I1126 09:31:21.404724 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="9bd6851d-7422-4701-a9da-1ab5ca8ce7df" containerName="nova-cell0-conductor-conductor" containerID="cri-o://e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63" gracePeriod=30 Nov 26 09:31:21 crc kubenswrapper[4909]: I1126 09:31:21.430582 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 09:31:21 crc kubenswrapper[4909]: I1126 09:31:21.430813 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="e5855f59-14c5-493a-ad57-d8a9cea9a517" containerName="nova-cell1-conductor-conductor" containerID="cri-o://ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8" gracePeriod=30 Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.290391 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff"] Nov 26 09:31:22 crc kubenswrapper[4909]: E1126 09:31:22.291055 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01fc94ad-49dd-4014-9145-beddf1a52403" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.291072 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="01fc94ad-49dd-4014-9145-beddf1a52403" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 26 09:31:22 crc kubenswrapper[4909]: E1126 09:31:22.291122 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca" containerName="collect-profiles" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.291129 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca" containerName="collect-profiles" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.291358 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d9ee2d7-d67a-4746-9bc6-0d7c5f931dca" containerName="collect-profiles" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.291383 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="01fc94ad-49dd-4014-9145-beddf1a52403" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.292157 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.296838 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.297073 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.297293 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.297399 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.297507 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-rljsk" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.297841 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.297905 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.322195 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff"] Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.372290 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.372565 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="da8256fc-4601-411d-9bfd-c86c73421537" containerName="nova-api-log" containerID="cri-o://a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82" gracePeriod=30 Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.372750 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="da8256fc-4601-411d-9bfd-c86c73421537" containerName="nova-api-api" containerID="cri-o://fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e" gracePeriod=30 Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.385860 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.386061 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="098da7ec-6f47-4e30-8e5f-00b91d2c7c26" containerName="nova-scheduler-scheduler" containerID="cri-o://6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc" gracePeriod=30 Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.413762 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.413871 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" 
(UniqueName: \"kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.413893 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms7v8\" (UniqueName: \"kubernetes.io/projected/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-kube-api-access-ms7v8\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.413918 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.413960 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.413990 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.414012 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.414033 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.414050 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: 
\"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.414075 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.414099 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.422152 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.422387 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-log" containerID="cri-o://d54e82eef0a7befc7107ab598f02c01e2f8dc84ceccec5f8a8f692bd196a22aa" gracePeriod=30 Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.422490 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-metadata" containerID="cri-o://29c3c076a9f890b0e35d179ff254261051f48c2c9f224b18f80d2267f5a2f21b" gracePeriod=30 Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.515457 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.515865 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.516023 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.516238 4909 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.516367 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms7v8\" (UniqueName: \"kubernetes.io/projected/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-kube-api-access-ms7v8\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.516455 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.516555 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.516662 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.516770 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.516907 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.516997 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.517264 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.517796 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.521830 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.522388 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.522414 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.523155 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.523683 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.523817 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.525189 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.525982 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.542472 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms7v8\" (UniqueName: \"kubernetes.io/projected/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-kube-api-access-ms7v8\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:22 crc kubenswrapper[4909]: I1126 09:31:22.625859 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.055695 4909 generic.go:334] "Generic (PLEG): container finished" podID="da8256fc-4601-411d-9bfd-c86c73421537" containerID="a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82" exitCode=143 Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.055768 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da8256fc-4601-411d-9bfd-c86c73421537","Type":"ContainerDied","Data":"a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82"} Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.057668 4909 generic.go:334] "Generic (PLEG): container finished" podID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerID="d54e82eef0a7befc7107ab598f02c01e2f8dc84ceccec5f8a8f692bd196a22aa" exitCode=143 Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.057709 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa","Type":"ContainerDied","Data":"d54e82eef0a7befc7107ab598f02c01e2f8dc84ceccec5f8a8f692bd196a22aa"} Nov 26 09:31:23 crc kubenswrapper[4909]: E1126 09:31:23.217988 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 09:31:23 crc kubenswrapper[4909]: E1126 09:31:23.219434 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 09:31:23 crc kubenswrapper[4909]: E1126 09:31:23.223737 4909 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 09:31:23 crc kubenswrapper[4909]: E1126 09:31:23.223823 4909 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="098da7ec-6f47-4e30-8e5f-00b91d2c7c26" containerName="nova-scheduler-scheduler" Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.254242 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.263096 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff"] Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.772521 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.950068 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-config-data\") pod \"e5855f59-14c5-493a-ad57-d8a9cea9a517\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.950191 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf55k\" (UniqueName: \"kubernetes.io/projected/e5855f59-14c5-493a-ad57-d8a9cea9a517-kube-api-access-cf55k\") pod \"e5855f59-14c5-493a-ad57-d8a9cea9a517\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.950365 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-combined-ca-bundle\") pod \"e5855f59-14c5-493a-ad57-d8a9cea9a517\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.955634 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5855f59-14c5-493a-ad57-d8a9cea9a517-kube-api-access-cf55k" (OuterVolumeSpecName: "kube-api-access-cf55k") pod "e5855f59-14c5-493a-ad57-d8a9cea9a517" (UID: "e5855f59-14c5-493a-ad57-d8a9cea9a517"). InnerVolumeSpecName "kube-api-access-cf55k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:31:23 crc kubenswrapper[4909]: E1126 09:31:23.982874 4909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-config-data podName:e5855f59-14c5-493a-ad57-d8a9cea9a517 nodeName:}" failed. No retries permitted until 2025-11-26 09:31:24.482847141 +0000 UTC m=+9056.629058307 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-config-data") pod "e5855f59-14c5-493a-ad57-d8a9cea9a517" (UID: "e5855f59-14c5-493a-ad57-d8a9cea9a517") : error deleting /var/lib/kubelet/pods/e5855f59-14c5-493a-ad57-d8a9cea9a517/volume-subpaths: remove /var/lib/kubelet/pods/e5855f59-14c5-493a-ad57-d8a9cea9a517/volume-subpaths: no such file or directory Nov 26 09:31:23 crc kubenswrapper[4909]: I1126 09:31:23.990333 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5855f59-14c5-493a-ad57-d8a9cea9a517" (UID: "e5855f59-14c5-493a-ad57-d8a9cea9a517"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.055348 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.056196 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf55k\" (UniqueName: \"kubernetes.io/projected/e5855f59-14c5-493a-ad57-d8a9cea9a517-kube-api-access-cf55k\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.070839 4909 generic.go:334] "Generic (PLEG): container finished" podID="e5855f59-14c5-493a-ad57-d8a9cea9a517" containerID="ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8" exitCode=0 Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.070919 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e5855f59-14c5-493a-ad57-d8a9cea9a517","Type":"ContainerDied","Data":"ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8"} Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.070949 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e5855f59-14c5-493a-ad57-d8a9cea9a517","Type":"ContainerDied","Data":"149035772dd9d469824e1b749a30eedfcfd413515f7f88f053bbd4a98714b585"} Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.070952 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.070969 4909 scope.go:117] "RemoveContainer" containerID="ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.073340 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" event={"ID":"b787ec2d-08c2-4282-9a94-fe5dc36fb14c","Type":"ContainerStarted","Data":"44b5032ffc6eed022cdd8bb46473d2f436650c3cdf5c46755bc6d89c685c3840"} Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.099482 4909 scope.go:117] "RemoveContainer" containerID="ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8" Nov 26 09:31:24 crc kubenswrapper[4909]: E1126 09:31:24.100165 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8\": container with ID starting with ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8 not found: ID does not exist" containerID="ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.100210 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8"} err="failed to get container status \"ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8\": rpc error: code = NotFound desc = could not find container \"ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8\": container with ID starting with ac517e458c0b4b9528eee41378169959cbfba4d74d3e7b3e90eec4b827ac95d8 not found: ID does not exist" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.566945 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-config-data\") pod \"e5855f59-14c5-493a-ad57-d8a9cea9a517\" (UID: \"e5855f59-14c5-493a-ad57-d8a9cea9a517\") " Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.572477 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-config-data" (OuterVolumeSpecName: "config-data") pod "e5855f59-14c5-493a-ad57-d8a9cea9a517" (UID: "e5855f59-14c5-493a-ad57-d8a9cea9a517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.670356 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5855f59-14c5-493a-ad57-d8a9cea9a517-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.715210 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.733653 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.745708 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 09:31:24 crc kubenswrapper[4909]: E1126 09:31:24.746504 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5855f59-14c5-493a-ad57-d8a9cea9a517" containerName="nova-cell1-conductor-conductor" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.746527 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5855f59-14c5-493a-ad57-d8a9cea9a517" containerName="nova-cell1-conductor-conductor" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.746783 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5855f59-14c5-493a-ad57-d8a9cea9a517" containerName="nova-cell1-conductor-conductor" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.750651 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.757617 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.761065 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.877457 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4188ae86-25b4-429a-a042-906a5b04ea81-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4188ae86-25b4-429a-a042-906a5b04ea81\") " pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.877615 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4188ae86-25b4-429a-a042-906a5b04ea81-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4188ae86-25b4-429a-a042-906a5b04ea81\") " pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.878217 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6xhj\" (UniqueName: \"kubernetes.io/projected/4188ae86-25b4-429a-a042-906a5b04ea81-kube-api-access-h6xhj\") pod \"nova-cell1-conductor-0\" (UID: \"4188ae86-25b4-429a-a042-906a5b04ea81\") " pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.980209 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6xhj\" (UniqueName: \"kubernetes.io/projected/4188ae86-25b4-429a-a042-906a5b04ea81-kube-api-access-h6xhj\") pod \"nova-cell1-conductor-0\" (UID: \"4188ae86-25b4-429a-a042-906a5b04ea81\") " pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.980335 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4188ae86-25b4-429a-a042-906a5b04ea81-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4188ae86-25b4-429a-a042-906a5b04ea81\") " pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.981243 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4188ae86-25b4-429a-a042-906a5b04ea81-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4188ae86-25b4-429a-a042-906a5b04ea81\") " pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.986267 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4188ae86-25b4-429a-a042-906a5b04ea81-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4188ae86-25b4-429a-a042-906a5b04ea81\") " pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.986344 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4188ae86-25b4-429a-a042-906a5b04ea81-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4188ae86-25b4-429a-a042-906a5b04ea81\") " pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:24 crc kubenswrapper[4909]: I1126 09:31:24.999303 4909 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6xhj\" (UniqueName: \"kubernetes.io/projected/4188ae86-25b4-429a-a042-906a5b04ea81-kube-api-access-h6xhj\") pod \"nova-cell1-conductor-0\" (UID: \"4188ae86-25b4-429a-a042-906a5b04ea81\") " pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.078248 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.089889 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" event={"ID":"b787ec2d-08c2-4282-9a94-fe5dc36fb14c","Type":"ContainerStarted","Data":"b62a0ae8de69bbc9f7d7aff8c4a946c0714710f92b1ad0e869e3fc91ce158681"} Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.112831 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" podStartSLOduration=2.541707726 podStartE2EDuration="3.112815487s" podCreationTimestamp="2025-11-26 09:31:22 +0000 UTC" firstStartedPulling="2025-11-26 09:31:23.253987592 +0000 UTC m=+9055.400198778" lastFinishedPulling="2025-11-26 09:31:23.825095373 +0000 UTC m=+9055.971306539" observedRunningTime="2025-11-26 09:31:25.110261267 +0000 UTC m=+9057.256472433" watchObservedRunningTime="2025-11-26 09:31:25.112815487 +0000 UTC m=+9057.259026653" Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.601492 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.807953 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.908336 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-config-data\") pod \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.908752 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktdzf\" (UniqueName: \"kubernetes.io/projected/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-kube-api-access-ktdzf\") pod \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.908801 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-combined-ca-bundle\") pod \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\" (UID: \"9bd6851d-7422-4701-a9da-1ab5ca8ce7df\") " Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.926638 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-kube-api-access-ktdzf" (OuterVolumeSpecName: "kube-api-access-ktdzf") pod "9bd6851d-7422-4701-a9da-1ab5ca8ce7df" (UID: "9bd6851d-7422-4701-a9da-1ab5ca8ce7df"). InnerVolumeSpecName "kube-api-access-ktdzf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.951517 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.97:8775/\": read tcp 10.217.0.2:56246->10.217.1.97:8775: read: connection reset by peer" Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.951707 4909 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.97:8775/\": read tcp 10.217.0.2:56234->10.217.1.97:8775: read: connection reset by peer" Nov 26 09:31:25 crc kubenswrapper[4909]: I1126 09:31:25.976768 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-config-data" (OuterVolumeSpecName: "config-data") pod "9bd6851d-7422-4701-a9da-1ab5ca8ce7df" (UID: "9bd6851d-7422-4701-a9da-1ab5ca8ce7df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.012011 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktdzf\" (UniqueName: \"kubernetes.io/projected/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-kube-api-access-ktdzf\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.012041 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.012717 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bd6851d-7422-4701-a9da-1ab5ca8ce7df" (UID: "9bd6851d-7422-4701-a9da-1ab5ca8ce7df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.065790 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.114497 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd6851d-7422-4701-a9da-1ab5ca8ce7df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.133536 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4188ae86-25b4-429a-a042-906a5b04ea81","Type":"ContainerStarted","Data":"28a2bdebe9c6baaa78d4286f22984d94ab46ab06ffd29236532225af7f0135a1"} Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.143882 4909 generic.go:334] "Generic (PLEG): container finished" podID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerID="29c3c076a9f890b0e35d179ff254261051f48c2c9f224b18f80d2267f5a2f21b" exitCode=0 Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.143976 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa","Type":"ContainerDied","Data":"29c3c076a9f890b0e35d179ff254261051f48c2c9f224b18f80d2267f5a2f21b"} Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.158541 4909 generic.go:334] "Generic (PLEG): container finished" podID="da8256fc-4601-411d-9bfd-c86c73421537" containerID="fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e" exitCode=0 Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.158639 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da8256fc-4601-411d-9bfd-c86c73421537","Type":"ContainerDied","Data":"fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e"} Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.158684 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da8256fc-4601-411d-9bfd-c86c73421537","Type":"ContainerDied","Data":"f7a84af66dd85cd5e88ecf13cb047cb577a1eb2933060c4d4da428e1f11b9d90"} Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.158705 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.158725 4909 scope.go:117] "RemoveContainer" containerID="fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.162300 4909 generic.go:334] "Generic (PLEG): container finished" podID="9bd6851d-7422-4701-a9da-1ab5ca8ce7df" containerID="e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63" exitCode=0 Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.163392 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.164928 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9bd6851d-7422-4701-a9da-1ab5ca8ce7df","Type":"ContainerDied","Data":"e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63"} Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.164961 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9bd6851d-7422-4701-a9da-1ab5ca8ce7df","Type":"ContainerDied","Data":"9a38596b015427ea967312dbbe23de7533a07db55b58ce14c5703c883df1dc90"} Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.186271 4909 scope.go:117] "RemoveContainer" containerID="a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.217220 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da8256fc-4601-411d-9bfd-c86c73421537-logs\") pod \"da8256fc-4601-411d-9bfd-c86c73421537\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.217640 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-config-data\") pod \"da8256fc-4601-411d-9bfd-c86c73421537\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.217697 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxc74\" (UniqueName: \"kubernetes.io/projected/da8256fc-4601-411d-9bfd-c86c73421537-kube-api-access-qxc74\") pod \"da8256fc-4601-411d-9bfd-c86c73421537\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.217846 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-combined-ca-bundle\") pod \"da8256fc-4601-411d-9bfd-c86c73421537\" (UID: \"da8256fc-4601-411d-9bfd-c86c73421537\") " Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.222060 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8256fc-4601-411d-9bfd-c86c73421537-logs" (OuterVolumeSpecName: "logs") pod "da8256fc-4601-411d-9bfd-c86c73421537" (UID: "da8256fc-4601-411d-9bfd-c86c73421537"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.226240 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8256fc-4601-411d-9bfd-c86c73421537-kube-api-access-qxc74" (OuterVolumeSpecName: "kube-api-access-qxc74") pod "da8256fc-4601-411d-9bfd-c86c73421537" (UID: "da8256fc-4601-411d-9bfd-c86c73421537"). InnerVolumeSpecName "kube-api-access-qxc74". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.258355 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da8256fc-4601-411d-9bfd-c86c73421537" (UID: "da8256fc-4601-411d-9bfd-c86c73421537"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.266330 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-config-data" (OuterVolumeSpecName: "config-data") pod "da8256fc-4601-411d-9bfd-c86c73421537" (UID: "da8256fc-4601-411d-9bfd-c86c73421537"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.276768 4909 scope.go:117] "RemoveContainer" containerID="fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e" Nov 26 09:31:26 crc kubenswrapper[4909]: E1126 09:31:26.277956 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e\": container with ID starting with fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e not found: ID does not exist" containerID="fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.278009 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e"} err="failed to get container status \"fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e\": rpc error: code = NotFound desc = could not find container \"fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e\": container with ID starting with fe67c28ed468e52ddd28e17544c766ae19694c6b01b26d15fde21638c15fe17e not found: ID does not exist" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.278030 4909 scope.go:117] "RemoveContainer" containerID="a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82" Nov 26 09:31:26 crc kubenswrapper[4909]: E1126 09:31:26.278734 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82\": container with ID starting with a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82 not found: ID does not exist" containerID="a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.278751 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82"} err="failed to get container status \"a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82\": rpc error: code = NotFound desc = could not find container \"a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82\": container with ID starting with a9cae8a07d092d5a2e677f6939c214970f987262500175df0da25eb10b23ad82 not found: ID does not exist" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.278763 4909 scope.go:117] "RemoveContainer" containerID="e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.285646 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.305072 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.321953 4909 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.321984 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da8256fc-4601-411d-9bfd-c86c73421537-logs\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.321993 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da8256fc-4601-411d-9bfd-c86c73421537-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.322002 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxc74\" (UniqueName: \"kubernetes.io/projected/da8256fc-4601-411d-9bfd-c86c73421537-kube-api-access-qxc74\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.344761 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 09:31:26 crc kubenswrapper[4909]: E1126 09:31:26.345382 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8256fc-4601-411d-9bfd-c86c73421537" containerName="nova-api-log" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.345400 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8256fc-4601-411d-9bfd-c86c73421537" containerName="nova-api-log" Nov 26 09:31:26 crc kubenswrapper[4909]: E1126 09:31:26.345439 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8256fc-4601-411d-9bfd-c86c73421537" containerName="nova-api-api" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.345446 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8256fc-4601-411d-9bfd-c86c73421537" containerName="nova-api-api" Nov 26 09:31:26 crc kubenswrapper[4909]: E1126 09:31:26.345556 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd6851d-7422-4701-a9da-1ab5ca8ce7df" containerName="nova-cell0-conductor-conductor" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.345564 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd6851d-7422-4701-a9da-1ab5ca8ce7df" containerName="nova-cell0-conductor-conductor" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.345795 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd6851d-7422-4701-a9da-1ab5ca8ce7df" containerName="nova-cell0-conductor-conductor" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.345820 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8256fc-4601-411d-9bfd-c86c73421537" containerName="nova-api-log" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.345832 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8256fc-4601-411d-9bfd-c86c73421537" containerName="nova-api-api" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.346577 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.352055 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.367853 4909 scope.go:117] "RemoveContainer" containerID="e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.370378 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 09:31:26 crc kubenswrapper[4909]: E1126 09:31:26.370577 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63\": container with ID starting with e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63 not found: ID does not exist" containerID="e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.370647 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63"} err="failed to get container status \"e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63\": rpc error: code = NotFound desc = could not find container \"e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63\": container with ID starting with e6ad1536da8056875d4939716fd8397b92db281f1bd3319754b74bacbf6dfd63 not found: ID does not exist" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.453984 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.525669 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6fca93-019e-4019-a170-fc4bd6c68530-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a6fca93-019e-4019-a170-fc4bd6c68530\") " pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.525919 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6fca93-019e-4019-a170-fc4bd6c68530-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a6fca93-019e-4019-a170-fc4bd6c68530\") " pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.525958 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdz6x\" (UniqueName: \"kubernetes.io/projected/7a6fca93-019e-4019-a170-fc4bd6c68530-kube-api-access-mdz6x\") pod \"nova-cell0-conductor-0\" (UID: \"7a6fca93-019e-4019-a170-fc4bd6c68530\") " pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.534107 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd6851d-7422-4701-a9da-1ab5ca8ce7df" path="/var/lib/kubelet/pods/9bd6851d-7422-4701-a9da-1ab5ca8ce7df/volumes" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.534649 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5855f59-14c5-493a-ad57-d8a9cea9a517" path="/var/lib/kubelet/pods/e5855f59-14c5-493a-ad57-d8a9cea9a517/volumes" Nov 26 09:31:26 crc kubenswrapper[4909]: 
I1126 09:31:26.607871 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.607908 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.607930 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 09:31:26 crc kubenswrapper[4909]: E1126 09:31:26.608414 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-log" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.608429 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-log" Nov 26 09:31:26 crc kubenswrapper[4909]: E1126 09:31:26.608488 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-metadata" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.608497 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-metadata" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.608864 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-log" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.608884 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" containerName="nova-metadata-metadata" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.610257 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.610359 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.617006 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.627090 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-logs\") pod \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.627207 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-combined-ca-bundle\") pod \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.627267 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc7q2\" (UniqueName: \"kubernetes.io/projected/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-kube-api-access-dc7q2\") pod \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.627311 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-config-data\") pod \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\" (UID: \"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa\") " Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.627791 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-logs" (OuterVolumeSpecName: "logs") pod "a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" (UID: "a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.628135 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6fca93-019e-4019-a170-fc4bd6c68530-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a6fca93-019e-4019-a170-fc4bd6c68530\") " pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.628176 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6fca93-019e-4019-a170-fc4bd6c68530-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a6fca93-019e-4019-a170-fc4bd6c68530\") " pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.628254 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdz6x\" (UniqueName: \"kubernetes.io/projected/7a6fca93-019e-4019-a170-fc4bd6c68530-kube-api-access-mdz6x\") pod \"nova-cell0-conductor-0\" (UID: \"7a6fca93-019e-4019-a170-fc4bd6c68530\") " pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.628338 4909 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-logs\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.636537 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6fca93-019e-4019-a170-fc4bd6c68530-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a6fca93-019e-4019-a170-fc4bd6c68530\") " pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.639767 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6fca93-019e-4019-a170-fc4bd6c68530-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a6fca93-019e-4019-a170-fc4bd6c68530\") " pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.647928 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-kube-api-access-dc7q2" (OuterVolumeSpecName: "kube-api-access-dc7q2") pod "a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" (UID: "a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa"). InnerVolumeSpecName "kube-api-access-dc7q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.651402 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdz6x\" (UniqueName: \"kubernetes.io/projected/7a6fca93-019e-4019-a170-fc4bd6c68530-kube-api-access-mdz6x\") pod \"nova-cell0-conductor-0\" (UID: \"7a6fca93-019e-4019-a170-fc4bd6c68530\") " pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.660238 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" (UID: "a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.662728 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-config-data" (OuterVolumeSpecName: "config-data") pod "a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" (UID: "a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.679810 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.730614 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj7cc\" (UniqueName: \"kubernetes.io/projected/c0c9c1db-492a-44cb-9eb2-756ddcd00876-kube-api-access-lj7cc\") pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.730670 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c9c1db-492a-44cb-9eb2-756ddcd00876-logs\") pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.730773 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c9c1db-492a-44cb-9eb2-756ddcd00876-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.730795 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c9c1db-492a-44cb-9eb2-756ddcd00876-config-data\") pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.730847 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.730859 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc7q2\" (UniqueName: \"kubernetes.io/projected/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-kube-api-access-dc7q2\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.730868 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.833008 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj7cc\" (UniqueName: \"kubernetes.io/projected/c0c9c1db-492a-44cb-9eb2-756ddcd00876-kube-api-access-lj7cc\") pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.833073 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c9c1db-492a-44cb-9eb2-756ddcd00876-logs\") 
pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.833185 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c9c1db-492a-44cb-9eb2-756ddcd00876-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.833212 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c9c1db-492a-44cb-9eb2-756ddcd00876-config-data\") pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.834362 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c9c1db-492a-44cb-9eb2-756ddcd00876-logs\") pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.839080 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c9c1db-492a-44cb-9eb2-756ddcd00876-config-data\") pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.850237 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c9c1db-492a-44cb-9eb2-756ddcd00876-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:26 crc kubenswrapper[4909]: I1126 09:31:26.859222 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj7cc\" (UniqueName: \"kubernetes.io/projected/c0c9c1db-492a-44cb-9eb2-756ddcd00876-kube-api-access-lj7cc\") pod \"nova-api-0\" (UID: \"c0c9c1db-492a-44cb-9eb2-756ddcd00876\") " pod="openstack/nova-api-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.149285 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.178835 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa","Type":"ContainerDied","Data":"6581367885cf086b1f5cc557bb0f963a1006754deba33ec0f424bbbe2afbfb05"} Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.178894 4909 scope.go:117] "RemoveContainer" containerID="29c3c076a9f890b0e35d179ff254261051f48c2c9f224b18f80d2267f5a2f21b" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.178896 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.191736 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4188ae86-25b4-429a-a042-906a5b04ea81","Type":"ContainerStarted","Data":"7395bc5fcc7e2f426b3fa8d2a294d04945f7e8022cbf8bef5b4739a63cc93662"} Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.191977 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.221285 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.221260934 podStartE2EDuration="3.221260934s" podCreationTimestamp="2025-11-26 09:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 09:31:27.208977147 +0000 UTC m=+9059.355188323" watchObservedRunningTime="2025-11-26 09:31:27.221260934 +0000 UTC m=+9059.367472100" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.231582 4909 scope.go:117] "RemoveContainer" containerID="d54e82eef0a7befc7107ab598f02c01e2f8dc84ceccec5f8a8f692bd196a22aa" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.254058 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.270397 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.281291 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.294144 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.299628 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.303359 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.306616 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.450783 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzrv\" (UniqueName: \"kubernetes.io/projected/cbb5caa6-8215-4021-91b6-1d27967f571d-kube-api-access-dhzrv\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.450838 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbb5caa6-8215-4021-91b6-1d27967f571d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.450858 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbb5caa6-8215-4021-91b6-1d27967f571d-config-data\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.451396 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbb5caa6-8215-4021-91b6-1d27967f571d-logs\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.553389 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbb5caa6-8215-4021-91b6-1d27967f571d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.553430 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbb5caa6-8215-4021-91b6-1d27967f571d-config-data\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.553624 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbb5caa6-8215-4021-91b6-1d27967f571d-logs\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.553669 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhzrv\" (UniqueName: \"kubernetes.io/projected/cbb5caa6-8215-4021-91b6-1d27967f571d-kube-api-access-dhzrv\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.554650 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cbb5caa6-8215-4021-91b6-1d27967f571d-logs\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.559702 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbb5caa6-8215-4021-91b6-1d27967f571d-config-data\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.561696 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbb5caa6-8215-4021-91b6-1d27967f571d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.570048 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhzrv\" (UniqueName: \"kubernetes.io/projected/cbb5caa6-8215-4021-91b6-1d27967f571d-kube-api-access-dhzrv\") pod \"nova-metadata-0\" (UID: \"cbb5caa6-8215-4021-91b6-1d27967f571d\") " pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.665776 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 09:31:27 crc kubenswrapper[4909]: I1126 09:31:27.721639 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.126190 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.218476 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7a6fca93-019e-4019-a170-fc4bd6c68530","Type":"ContainerStarted","Data":"1f7ea74cfb1cead22b957600c961ce4123befe1e1aead0ba26359f44121c6847"} Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.219045 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.219069 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7a6fca93-019e-4019-a170-fc4bd6c68530","Type":"ContainerStarted","Data":"39ddc9c6796ef45307c014cbcb8c0fa003e32b5533aa013ed38bf91170544001"} Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.222893 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0c9c1db-492a-44cb-9eb2-756ddcd00876","Type":"ContainerStarted","Data":"987a725c00aee61c29353e61a2ab616c035eb23e71576eb03634ae01ebab7d1b"} Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.222958 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0c9c1db-492a-44cb-9eb2-756ddcd00876","Type":"ContainerStarted","Data":"6f0a9a4a02aa7b3e29c0618469d3f3dbaa4ea7696855dca8994855a2a4051f53"} Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.224692 4909 generic.go:334] "Generic (PLEG): container finished" podID="098da7ec-6f47-4e30-8e5f-00b91d2c7c26" containerID="6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc" exitCode=0 Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.224772 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"098da7ec-6f47-4e30-8e5f-00b91d2c7c26","Type":"ContainerDied","Data":"6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc"} Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.224819 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"098da7ec-6f47-4e30-8e5f-00b91d2c7c26","Type":"ContainerDied","Data":"85b4f43872ae6a6db98d7bc81a7fa5cc06b1ffb16efb58e532f5fb0ff59bd14c"} Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.224845 4909 scope.go:117] "RemoveContainer" containerID="6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.225120 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.245588 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.245563458 podStartE2EDuration="2.245563458s" podCreationTimestamp="2025-11-26 09:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 09:31:28.236967263 +0000 UTC m=+9060.383178429" watchObservedRunningTime="2025-11-26 09:31:28.245563458 +0000 UTC m=+9060.391774624" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.261814 4909 scope.go:117] "RemoveContainer" containerID="6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.265738 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 09:31:28 crc kubenswrapper[4909]: E1126 09:31:28.266009 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc\": container with ID starting with 6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc not found: ID does not exist" containerID="6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.266044 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc"} err="failed to get container status \"6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc\": rpc error: code = NotFound desc = could not find container \"6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc\": container with ID starting with 6e50b270470ece433eaedf8e33cf75223b78d24369797243cd6392c2896097dc not found: ID does not exist" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.282815 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-combined-ca-bundle\") pod \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.283013 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbn9n\" (UniqueName: \"kubernetes.io/projected/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-kube-api-access-vbn9n\") pod \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.283189 4909 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-config-data\") pod \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\" (UID: \"098da7ec-6f47-4e30-8e5f-00b91d2c7c26\") " Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.291316 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-kube-api-access-vbn9n" (OuterVolumeSpecName: "kube-api-access-vbn9n") pod "098da7ec-6f47-4e30-8e5f-00b91d2c7c26" (UID: "098da7ec-6f47-4e30-8e5f-00b91d2c7c26"). InnerVolumeSpecName "kube-api-access-vbn9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.316079 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "098da7ec-6f47-4e30-8e5f-00b91d2c7c26" (UID: "098da7ec-6f47-4e30-8e5f-00b91d2c7c26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.353183 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-config-data" (OuterVolumeSpecName: "config-data") pod "098da7ec-6f47-4e30-8e5f-00b91d2c7c26" (UID: "098da7ec-6f47-4e30-8e5f-00b91d2c7c26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.385675 4909 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.385706 4909 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.385718 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbn9n\" (UniqueName: \"kubernetes.io/projected/098da7ec-6f47-4e30-8e5f-00b91d2c7c26-kube-api-access-vbn9n\") on node \"crc\" DevicePath \"\"" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.520381 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa" path="/var/lib/kubelet/pods/a3e723ef-619a-4ed8-a8b0-5920ccc5dfaa/volumes" Nov 26 09:31:28 crc kubenswrapper[4909]: I1126 09:31:28.521544 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8256fc-4601-411d-9bfd-c86c73421537" path="/var/lib/kubelet/pods/da8256fc-4601-411d-9bfd-c86c73421537/volumes" Nov 26 09:31:29 crc kubenswrapper[4909]: I1126 09:31:29.237768 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0c9c1db-492a-44cb-9eb2-756ddcd00876","Type":"ContainerStarted","Data":"26d707779160a7517251f6a074143f919337e014be1bd76936fa736dd1b0bbad"} Nov 26 09:31:29 crc kubenswrapper[4909]: I1126 09:31:29.240101 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbb5caa6-8215-4021-91b6-1d27967f571d","Type":"ContainerStarted","Data":"496d97e21a4e0540534533ce905eabf3195241a0d08d20c18d2ba48218d11578"} Nov 26 09:31:29 crc 
Nov 26 09:31:29 crc kubenswrapper[4909]: I1126 09:31:29.240139 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbb5caa6-8215-4021-91b6-1d27967f571d","Type":"ContainerStarted","Data":"c4f80f25c7f540887a3968e6570f293380f31fa652c3040f0d5ff48a05608589"}
Nov 26 09:31:29 crc kubenswrapper[4909]: I1126 09:31:29.240151 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbb5caa6-8215-4021-91b6-1d27967f571d","Type":"ContainerStarted","Data":"34d2bac4c0fd28255b8a2c0b850761f3d303aece225cc9f23dda6b35b37e77e5"}
Nov 26 09:31:29 crc kubenswrapper[4909]: I1126 09:31:29.262456 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.262418538 podStartE2EDuration="3.262418538s" podCreationTimestamp="2025-11-26 09:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 09:31:29.259778546 +0000 UTC m=+9061.405989732" watchObservedRunningTime="2025-11-26 09:31:29.262418538 +0000 UTC m=+9061.408629704"
Nov 26 09:31:29 crc kubenswrapper[4909]: I1126 09:31:29.300564 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.300545762 podStartE2EDuration="2.300545762s" podCreationTimestamp="2025-11-26 09:31:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 09:31:29.299191125 +0000 UTC m=+9061.445402291" watchObservedRunningTime="2025-11-26 09:31:29.300545762 +0000 UTC m=+9061.446756928"
Nov 26 09:31:32 crc kubenswrapper[4909]: I1126 09:31:32.666914 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 26 09:31:32 crc kubenswrapper[4909]: I1126 09:31:32.667224 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 26 09:31:35 crc kubenswrapper[4909]: I1126 09:31:35.132674 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Nov 26 09:31:36 crc kubenswrapper[4909]: I1126 09:31:36.723508 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Nov 26 09:31:37 crc kubenswrapper[4909]: I1126 09:31:37.150020 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 26 09:31:37 crc kubenswrapper[4909]: I1126 09:31:37.150122 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 26 09:31:37 crc kubenswrapper[4909]: I1126 09:31:37.300778 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 09:31:37 crc kubenswrapper[4909]: I1126 09:31:37.300846 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 09:31:37 crc kubenswrapper[4909]: I1126 09:31:37.667259 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 26 09:31:37 crc kubenswrapper[4909]: I1126 09:31:37.667558 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 26 09:31:38 crc kubenswrapper[4909]: I1126 09:31:38.232970 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0c9c1db-492a-44cb-9eb2-756ddcd00876" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 26 09:31:38 crc kubenswrapper[4909]: I1126 09:31:38.233289 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0c9c1db-492a-44cb-9eb2-756ddcd00876" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 26 09:31:38 crc kubenswrapper[4909]: I1126 09:31:38.750836 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cbb5caa6-8215-4021-91b6-1d27967f571d" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.209:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 26 09:31:38 crc kubenswrapper[4909]: I1126 09:31:38.750910 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cbb5caa6-8215-4021-91b6-1d27967f571d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.209:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 26 09:31:47 crc kubenswrapper[4909]: I1126 09:31:47.155281 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 26 09:31:47 crc kubenswrapper[4909]: I1126 09:31:47.156478 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 26 09:31:47 crc kubenswrapper[4909]: I1126 09:31:47.158152 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 26 09:31:47 crc kubenswrapper[4909]: I1126 09:31:47.161411 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 26 09:31:47 crc kubenswrapper[4909]: I1126 09:31:47.512556 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 26 09:31:47 crc kubenswrapper[4909]: I1126 09:31:47.518194 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 26 09:31:47 crc kubenswrapper[4909]: I1126 09:31:47.717009 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 26 09:31:47 crc kubenswrapper[4909]: I1126 09:31:47.728995 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 26 09:31:47 crc kubenswrapper[4909]: I1126 09:31:47.737063 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 26 09:31:48 crc kubenswrapper[4909]: I1126 09:31:48.536462 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 26 09:31:58 crc kubenswrapper[4909]: I1126 09:31:58.666856 4909 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod098da7ec-6f47-4e30-8e5f-00b91d2c7c26"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod098da7ec-6f47-4e30-8e5f-00b91d2c7c26] : Timed out while waiting for systemd to remove kubepods-besteffort-pod098da7ec_6f47_4e30_8e5f_00b91d2c7c26.slice"
Nov 26 09:31:58 crc kubenswrapper[4909]: E1126 09:31:58.667383 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod098da7ec-6f47-4e30-8e5f-00b91d2c7c26] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod098da7ec-6f47-4e30-8e5f-00b91d2c7c26] : Timed out while waiting for systemd to remove kubepods-besteffort-pod098da7ec_6f47_4e30_8e5f_00b91d2c7c26.slice" pod="openstack/nova-scheduler-0" podUID="098da7ec-6f47-4e30-8e5f-00b91d2c7c26"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.676755 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.735467 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.748061 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.761180 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 26 09:31:59 crc kubenswrapper[4909]: E1126 09:31:59.761869 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098da7ec-6f47-4e30-8e5f-00b91d2c7c26" containerName="nova-scheduler-scheduler"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.762060 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="098da7ec-6f47-4e30-8e5f-00b91d2c7c26" containerName="nova-scheduler-scheduler"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.762500 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="098da7ec-6f47-4e30-8e5f-00b91d2c7c26" containerName="nova-scheduler-scheduler"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.764068 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.767312 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.781000 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.837757 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29npv\" (UniqueName: \"kubernetes.io/projected/2105779c-8f18-4582-ad9e-e071b51f7dbc-kube-api-access-29npv\") pod \"nova-scheduler-0\" (UID: \"2105779c-8f18-4582-ad9e-e071b51f7dbc\") " pod="openstack/nova-scheduler-0"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.838194 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2105779c-8f18-4582-ad9e-e071b51f7dbc-config-data\") pod \"nova-scheduler-0\" (UID: \"2105779c-8f18-4582-ad9e-e071b51f7dbc\") " pod="openstack/nova-scheduler-0"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.838228 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2105779c-8f18-4582-ad9e-e071b51f7dbc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2105779c-8f18-4582-ad9e-e071b51f7dbc\") " pod="openstack/nova-scheduler-0"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.939862 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2105779c-8f18-4582-ad9e-e071b51f7dbc-config-data\") pod \"nova-scheduler-0\" (UID: \"2105779c-8f18-4582-ad9e-e071b51f7dbc\") " pod="openstack/nova-scheduler-0"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.939905 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2105779c-8f18-4582-ad9e-e071b51f7dbc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2105779c-8f18-4582-ad9e-e071b51f7dbc\") " pod="openstack/nova-scheduler-0"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.940027 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29npv\" (UniqueName: \"kubernetes.io/projected/2105779c-8f18-4582-ad9e-e071b51f7dbc-kube-api-access-29npv\") pod \"nova-scheduler-0\" (UID: \"2105779c-8f18-4582-ad9e-e071b51f7dbc\") " pod="openstack/nova-scheduler-0"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.946723 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2105779c-8f18-4582-ad9e-e071b51f7dbc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2105779c-8f18-4582-ad9e-e071b51f7dbc\") " pod="openstack/nova-scheduler-0"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.952348 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2105779c-8f18-4582-ad9e-e071b51f7dbc-config-data\") pod \"nova-scheduler-0\" (UID: \"2105779c-8f18-4582-ad9e-e071b51f7dbc\") " pod="openstack/nova-scheduler-0"
Nov 26 09:31:59 crc kubenswrapper[4909]: I1126 09:31:59.957089 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29npv\" (UniqueName: \"kubernetes.io/projected/2105779c-8f18-4582-ad9e-e071b51f7dbc-kube-api-access-29npv\") pod \"nova-scheduler-0\" (UID: \"2105779c-8f18-4582-ad9e-e071b51f7dbc\") " pod="openstack/nova-scheduler-0"
Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.087988 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.293209 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wlkh8"]
Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.298634 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlkh8"
Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.320878 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlkh8"]
Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.353664 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-utilities\") pod \"community-operators-wlkh8\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " pod="openshift-marketplace/community-operators-wlkh8"
Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.353722 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5ls\" (UniqueName: \"kubernetes.io/projected/de92a744-611b-43b7-94aa-7ac25118e927-kube-api-access-dk5ls\") pod \"community-operators-wlkh8\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " pod="openshift-marketplace/community-operators-wlkh8"
Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.353836 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-catalog-content\") pod \"community-operators-wlkh8\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " pod="openshift-marketplace/community-operators-wlkh8"
Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.456216 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-utilities\") pod \"community-operators-wlkh8\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " pod="openshift-marketplace/community-operators-wlkh8"
Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.456268 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5ls\" (UniqueName: \"kubernetes.io/projected/de92a744-611b-43b7-94aa-7ac25118e927-kube-api-access-dk5ls\") pod \"community-operators-wlkh8\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " pod="openshift-marketplace/community-operators-wlkh8"
Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.456385 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-catalog-content\") pod \"community-operators-wlkh8\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " pod="openshift-marketplace/community-operators-wlkh8"
\"kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-catalog-content\") pod \"community-operators-wlkh8\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " pod="openshift-marketplace/community-operators-wlkh8" Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.456976 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-utilities\") pod \"community-operators-wlkh8\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " pod="openshift-marketplace/community-operators-wlkh8" Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.476527 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5ls\" (UniqueName: \"kubernetes.io/projected/de92a744-611b-43b7-94aa-7ac25118e927-kube-api-access-dk5ls\") pod \"community-operators-wlkh8\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " pod="openshift-marketplace/community-operators-wlkh8" Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.519825 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098da7ec-6f47-4e30-8e5f-00b91d2c7c26" path="/var/lib/kubelet/pods/098da7ec-6f47-4e30-8e5f-00b91d2c7c26/volumes" Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.620507 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.632296 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlkh8" Nov 26 09:32:00 crc kubenswrapper[4909]: I1126 09:32:00.793777 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2105779c-8f18-4582-ad9e-e071b51f7dbc","Type":"ContainerStarted","Data":"14c3b51f6892d1689085a4b60d4f5b0c82a23af3355fc61316669a099d672e6d"} Nov 26 09:32:01 crc kubenswrapper[4909]: I1126 09:32:01.146186 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlkh8"] Nov 26 09:32:01 crc kubenswrapper[4909]: W1126 09:32:01.149573 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde92a744_611b_43b7_94aa_7ac25118e927.slice/crio-c73258a3a6e7d315c421b71c53a90a5fba12647ed6554d897778532b374763e1 WatchSource:0}: Error finding container c73258a3a6e7d315c421b71c53a90a5fba12647ed6554d897778532b374763e1: Status 404 returned error can't find the container with id c73258a3a6e7d315c421b71c53a90a5fba12647ed6554d897778532b374763e1 Nov 26 09:32:01 crc kubenswrapper[4909]: I1126 09:32:01.804217 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2105779c-8f18-4582-ad9e-e071b51f7dbc","Type":"ContainerStarted","Data":"3e7ceeb6ea4753818e152303f645e8efbba29bb411114b188db4ad317beac238"} Nov 26 09:32:01 crc kubenswrapper[4909]: I1126 09:32:01.808429 4909 generic.go:334] "Generic (PLEG): container finished" podID="de92a744-611b-43b7-94aa-7ac25118e927" containerID="f65044037ea70fdaff8fc43bf2f0b09b41dfab3b98fe45aeb0616c80cd1959c6" exitCode=0 Nov 26 09:32:01 crc kubenswrapper[4909]: I1126 09:32:01.808509 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlkh8" event={"ID":"de92a744-611b-43b7-94aa-7ac25118e927","Type":"ContainerDied","Data":"f65044037ea70fdaff8fc43bf2f0b09b41dfab3b98fe45aeb0616c80cd1959c6"} Nov 26 09:32:01 crc kubenswrapper[4909]: I1126 09:32:01.808566 
4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlkh8" event={"ID":"de92a744-611b-43b7-94aa-7ac25118e927","Type":"ContainerStarted","Data":"c73258a3a6e7d315c421b71c53a90a5fba12647ed6554d897778532b374763e1"} Nov 26 09:32:01 crc kubenswrapper[4909]: I1126 09:32:01.839282 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.83925237 podStartE2EDuration="2.83925237s" podCreationTimestamp="2025-11-26 09:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 09:32:01.827726254 +0000 UTC m=+9093.973937430" watchObservedRunningTime="2025-11-26 09:32:01.83925237 +0000 UTC m=+9093.985463576" Nov 26 09:32:03 crc kubenswrapper[4909]: I1126 09:32:03.832209 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlkh8" event={"ID":"de92a744-611b-43b7-94aa-7ac25118e927","Type":"ContainerStarted","Data":"232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d"} Nov 26 09:32:04 crc kubenswrapper[4909]: I1126 09:32:04.844664 4909 generic.go:334] "Generic (PLEG): container finished" podID="de92a744-611b-43b7-94aa-7ac25118e927" containerID="232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d" exitCode=0 Nov 26 09:32:04 crc kubenswrapper[4909]: I1126 09:32:04.844770 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlkh8" event={"ID":"de92a744-611b-43b7-94aa-7ac25118e927","Type":"ContainerDied","Data":"232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d"} Nov 26 09:32:05 crc kubenswrapper[4909]: I1126 09:32:05.088382 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 26 09:32:06 crc kubenswrapper[4909]: I1126 09:32:06.874092 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlkh8" event={"ID":"de92a744-611b-43b7-94aa-7ac25118e927","Type":"ContainerStarted","Data":"253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a"} Nov 26 09:32:06 crc kubenswrapper[4909]: I1126 09:32:06.890609 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wlkh8" podStartSLOduration=3.340760045 podStartE2EDuration="6.89057603s" podCreationTimestamp="2025-11-26 09:32:00 +0000 UTC" firstStartedPulling="2025-11-26 09:32:01.810901814 +0000 UTC m=+9093.957112990" lastFinishedPulling="2025-11-26 09:32:05.360717779 +0000 UTC m=+9097.506928975" observedRunningTime="2025-11-26 09:32:06.890148479 +0000 UTC m=+9099.036359645" watchObservedRunningTime="2025-11-26 09:32:06.89057603 +0000 UTC m=+9099.036787196" Nov 26 09:32:07 crc kubenswrapper[4909]: I1126 09:32:07.300640 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:32:07 crc kubenswrapper[4909]: I1126 09:32:07.300728 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 26 09:32:10 crc kubenswrapper[4909]: I1126 09:32:10.088708 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 26 09:32:10 crc kubenswrapper[4909]: I1126 09:32:10.134841 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 26 09:32:10 crc kubenswrapper[4909]: I1126 09:32:10.633732 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wlkh8" Nov 26 09:32:10 crc kubenswrapper[4909]: I1126 09:32:10.633808 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wlkh8" Nov 26 09:32:10 crc kubenswrapper[4909]: I1126 09:32:10.701877 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wlkh8" Nov 26 09:32:11 crc kubenswrapper[4909]: I1126 09:32:11.076120 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 26 09:32:20 crc kubenswrapper[4909]: I1126 09:32:20.686383 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wlkh8" Nov 26 09:32:20 crc kubenswrapper[4909]: I1126 09:32:20.764045 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wlkh8"] Nov 26 09:32:21 crc kubenswrapper[4909]: I1126 09:32:21.049683 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wlkh8" podUID="de92a744-611b-43b7-94aa-7ac25118e927" containerName="registry-server" containerID="cri-o://253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a" gracePeriod=2 Nov 26 09:32:21 crc kubenswrapper[4909]: I1126 09:32:21.575643 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlkh8" Nov 26 09:32:21 crc kubenswrapper[4909]: I1126 09:32:21.621488 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk5ls\" (UniqueName: \"kubernetes.io/projected/de92a744-611b-43b7-94aa-7ac25118e927-kube-api-access-dk5ls\") pod \"de92a744-611b-43b7-94aa-7ac25118e927\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " Nov 26 09:32:21 crc kubenswrapper[4909]: I1126 09:32:21.621815 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-catalog-content\") pod \"de92a744-611b-43b7-94aa-7ac25118e927\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " Nov 26 09:32:21 crc kubenswrapper[4909]: I1126 09:32:21.621867 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-utilities\") pod \"de92a744-611b-43b7-94aa-7ac25118e927\" (UID: \"de92a744-611b-43b7-94aa-7ac25118e927\") " Nov 26 09:32:21 crc kubenswrapper[4909]: I1126 09:32:21.623649 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-utilities" (OuterVolumeSpecName: "utilities") pod "de92a744-611b-43b7-94aa-7ac25118e927" (UID: "de92a744-611b-43b7-94aa-7ac25118e927"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:32:21 crc kubenswrapper[4909]: I1126 09:32:21.627579 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de92a744-611b-43b7-94aa-7ac25118e927-kube-api-access-dk5ls" (OuterVolumeSpecName: "kube-api-access-dk5ls") pod "de92a744-611b-43b7-94aa-7ac25118e927" (UID: "de92a744-611b-43b7-94aa-7ac25118e927"). InnerVolumeSpecName "kube-api-access-dk5ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:32:21 crc kubenswrapper[4909]: I1126 09:32:21.666191 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de92a744-611b-43b7-94aa-7ac25118e927" (UID: "de92a744-611b-43b7-94aa-7ac25118e927"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:32:21 crc kubenswrapper[4909]: I1126 09:32:21.724755 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk5ls\" (UniqueName: \"kubernetes.io/projected/de92a744-611b-43b7-94aa-7ac25118e927-kube-api-access-dk5ls\") on node \"crc\" DevicePath \"\"" Nov 26 09:32:21 crc kubenswrapper[4909]: I1126 09:32:21.724785 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:32:21 crc kubenswrapper[4909]: I1126 09:32:21.724796 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de92a744-611b-43b7-94aa-7ac25118e927-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.061125 4909 generic.go:334] "Generic (PLEG): container finished" podID="de92a744-611b-43b7-94aa-7ac25118e927" containerID="253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a" exitCode=0 Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.061163 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlkh8" event={"ID":"de92a744-611b-43b7-94aa-7ac25118e927","Type":"ContainerDied","Data":"253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a"} Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.061202 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlkh8" event={"ID":"de92a744-611b-43b7-94aa-7ac25118e927","Type":"ContainerDied","Data":"c73258a3a6e7d315c421b71c53a90a5fba12647ed6554d897778532b374763e1"} Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.061208 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wlkh8" Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.061221 4909 scope.go:117] "RemoveContainer" containerID="253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a" Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.085807 4909 scope.go:117] "RemoveContainer" containerID="232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d" Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.114737 4909 scope.go:117] "RemoveContainer" containerID="f65044037ea70fdaff8fc43bf2f0b09b41dfab3b98fe45aeb0616c80cd1959c6" Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.119082 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wlkh8"] Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.127905 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wlkh8"] Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.164293 4909 scope.go:117] "RemoveContainer" containerID="253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a" Nov 26 09:32:22 crc kubenswrapper[4909]: E1126 09:32:22.164778 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a\": container with ID starting with 253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a not found: ID does not exist" containerID="253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a" Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.164811 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a"} err="failed to get container status \"253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a\": rpc error: code = NotFound desc = could not find container \"253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a\": container with ID starting with 253522b9229d5b6901ad9b9bcde977f029d7e87bffbadd771f53396bb7c3268a not found: ID does not exist" Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.164833 4909 scope.go:117] "RemoveContainer" containerID="232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d" Nov 26 09:32:22 crc kubenswrapper[4909]: E1126 09:32:22.165184 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d\": container with ID starting with 232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d not found: ID does not exist" containerID="232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d" Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.165214 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d"} err="failed to get container status \"232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d\": rpc error: code = NotFound desc = could not find container \"232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d\": container with ID starting with 232dd035a5b62e97715f69663d1e370fa4bf9780c481bd14fa4ef4541993a15d not found: ID does not exist" Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.165235 4909 scope.go:117] "RemoveContainer" 
containerID="f65044037ea70fdaff8fc43bf2f0b09b41dfab3b98fe45aeb0616c80cd1959c6" Nov 26 09:32:22 crc kubenswrapper[4909]: E1126 09:32:22.165504 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f65044037ea70fdaff8fc43bf2f0b09b41dfab3b98fe45aeb0616c80cd1959c6\": container with ID starting with f65044037ea70fdaff8fc43bf2f0b09b41dfab3b98fe45aeb0616c80cd1959c6 not found: ID does not exist" containerID="f65044037ea70fdaff8fc43bf2f0b09b41dfab3b98fe45aeb0616c80cd1959c6" Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.165525 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65044037ea70fdaff8fc43bf2f0b09b41dfab3b98fe45aeb0616c80cd1959c6"} err="failed to get container status \"f65044037ea70fdaff8fc43bf2f0b09b41dfab3b98fe45aeb0616c80cd1959c6\": rpc error: code = NotFound desc = could not find container \"f65044037ea70fdaff8fc43bf2f0b09b41dfab3b98fe45aeb0616c80cd1959c6\": container with ID starting with f65044037ea70fdaff8fc43bf2f0b09b41dfab3b98fe45aeb0616c80cd1959c6 not found: ID does not exist" Nov 26 09:32:22 crc kubenswrapper[4909]: I1126 09:32:22.511270 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de92a744-611b-43b7-94aa-7ac25118e927" path="/var/lib/kubelet/pods/de92a744-611b-43b7-94aa-7ac25118e927/volumes" Nov 26 09:32:37 crc kubenswrapper[4909]: I1126 09:32:37.301233 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:32:37 crc kubenswrapper[4909]: I1126 09:32:37.301989 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:32:37 crc kubenswrapper[4909]: I1126 09:32:37.302074 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 09:32:37 crc kubenswrapper[4909]: I1126 09:32:37.303405 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 09:32:37 crc kubenswrapper[4909]: I1126 09:32:37.303558 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d" gracePeriod=600 Nov 26 09:32:37 crc kubenswrapper[4909]: E1126 09:32:37.428932 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:32:38 crc kubenswrapper[4909]: I1126 09:32:38.260792 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d" exitCode=0 Nov 26 09:32:38 crc kubenswrapper[4909]: I1126 09:32:38.261202 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"} Nov 26 09:32:38 crc kubenswrapper[4909]: I1126 09:32:38.261435 4909 scope.go:117] "RemoveContainer" containerID="f79c65b8b498e59164657f3aef6f9c00c5eb7f55e85d39244a3da61e2f374d62" Nov 26 09:32:38 crc kubenswrapper[4909]: I1126 09:32:38.262540 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d" Nov 26 09:32:38 crc kubenswrapper[4909]: E1126 09:32:38.263232 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:32:48 crc kubenswrapper[4909]: I1126 09:32:48.513286 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d" Nov 26 09:32:48 crc kubenswrapper[4909]: E1126 09:32:48.514297 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:33:01 crc kubenswrapper[4909]: I1126 09:33:01.498948 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d" Nov 26 09:33:01 crc kubenswrapper[4909]: E1126 09:33:01.499663 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:33:16 crc kubenswrapper[4909]: I1126 09:33:16.500114 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d" Nov 26 09:33:16 crc kubenswrapper[4909]: E1126 09:33:16.501521 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" 
podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:33:27 crc kubenswrapper[4909]: I1126 09:33:27.499717 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d" Nov 26 09:33:27 crc kubenswrapper[4909]: E1126 09:33:27.501145 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:33:39 crc kubenswrapper[4909]: I1126 09:33:39.499135 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d" Nov 26 09:33:39 crc kubenswrapper[4909]: E1126 09:33:39.500062 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:33:50 crc kubenswrapper[4909]: I1126 09:33:50.499439 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d" Nov 26 09:33:50 crc kubenswrapper[4909]: E1126 09:33:50.500081 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:34:01 crc kubenswrapper[4909]: I1126 09:34:01.498957 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d" Nov 26 09:34:01 crc kubenswrapper[4909]: E1126 09:34:01.499781 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.142407 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jffb5"] Nov 26 09:34:11 crc kubenswrapper[4909]: E1126 09:34:11.143622 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de92a744-611b-43b7-94aa-7ac25118e927" containerName="extract-content" Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.143637 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="de92a744-611b-43b7-94aa-7ac25118e927" containerName="extract-content" Nov 26 09:34:11 crc kubenswrapper[4909]: E1126 09:34:11.143673 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de92a744-611b-43b7-94aa-7ac25118e927" containerName="extract-utilities" Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.142407 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jffb5"]
Nov 26 09:34:11 crc kubenswrapper[4909]: E1126 09:34:11.143622 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de92a744-611b-43b7-94aa-7ac25118e927" containerName="extract-content"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.143637 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="de92a744-611b-43b7-94aa-7ac25118e927" containerName="extract-content"
Nov 26 09:34:11 crc kubenswrapper[4909]: E1126 09:34:11.143673 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de92a744-611b-43b7-94aa-7ac25118e927" containerName="extract-utilities"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.143682 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="de92a744-611b-43b7-94aa-7ac25118e927" containerName="extract-utilities"
Nov 26 09:34:11 crc kubenswrapper[4909]: E1126 09:34:11.143712 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de92a744-611b-43b7-94aa-7ac25118e927" containerName="registry-server"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.143719 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="de92a744-611b-43b7-94aa-7ac25118e927" containerName="registry-server"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.144118 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="de92a744-611b-43b7-94aa-7ac25118e927" containerName="registry-server"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.145799 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.171167 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jffb5"]
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.197242 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-catalog-content\") pod \"certified-operators-jffb5\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") " pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.197387 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv78r\" (UniqueName: \"kubernetes.io/projected/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-kube-api-access-pv78r\") pod \"certified-operators-jffb5\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") " pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.197434 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-utilities\") pod \"certified-operators-jffb5\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") " pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.299727 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv78r\" (UniqueName: \"kubernetes.io/projected/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-kube-api-access-pv78r\") pod \"certified-operators-jffb5\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") " pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.299822 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-utilities\") pod \"certified-operators-jffb5\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") " pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.300125 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-catalog-content\") pod \"certified-operators-jffb5\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") " pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.301069 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-utilities\") pod \"certified-operators-jffb5\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") " pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.301106 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-catalog-content\") pod \"certified-operators-jffb5\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") " pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.320433 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv78r\" (UniqueName: \"kubernetes.io/projected/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-kube-api-access-pv78r\") pod \"certified-operators-jffb5\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") " pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:11 crc kubenswrapper[4909]: I1126 09:34:11.473180 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:12 crc kubenswrapper[4909]: I1126 09:34:12.088151 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jffb5"]
Nov 26 09:34:12 crc kubenswrapper[4909]: I1126 09:34:12.477096 4909 generic.go:334] "Generic (PLEG): container finished" podID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" containerID="8ba2642b597fa0020bc815875e6d96e66cbea47d5c16fdd433ad27ca01b48643" exitCode=0
Nov 26 09:34:12 crc kubenswrapper[4909]: I1126 09:34:12.477160 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jffb5" event={"ID":"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7","Type":"ContainerDied","Data":"8ba2642b597fa0020bc815875e6d96e66cbea47d5c16fdd433ad27ca01b48643"}
Nov 26 09:34:12 crc kubenswrapper[4909]: I1126 09:34:12.477417 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jffb5" event={"ID":"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7","Type":"ContainerStarted","Data":"3a10e0cc5234a95a91e3282b9caddb50319d81a92be35efeac7b29282d81b7f6"}
Nov 26 09:34:12 crc kubenswrapper[4909]: I1126 09:34:12.499964 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:34:12 crc kubenswrapper[4909]: E1126 09:34:12.500255 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:34:14 crc kubenswrapper[4909]: I1126 09:34:14.516978 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jffb5" event={"ID":"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7","Type":"ContainerStarted","Data":"2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8"}
Nov 26 09:34:15 crc kubenswrapper[4909]: I1126 09:34:15.525473 4909 generic.go:334] "Generic (PLEG): container finished" podID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" containerID="2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8" exitCode=0
Nov 26 09:34:15 crc kubenswrapper[4909]: I1126 09:34:15.525564 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jffb5" event={"ID":"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7","Type":"ContainerDied","Data":"2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8"}
Nov 26 09:34:16 crc kubenswrapper[4909]: I1126 09:34:16.540672 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jffb5" event={"ID":"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7","Type":"ContainerStarted","Data":"72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da"}
Nov 26 09:34:16 crc kubenswrapper[4909]: I1126 09:34:16.571800 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jffb5" podStartSLOduration=1.946396337 podStartE2EDuration="5.57177311s" podCreationTimestamp="2025-11-26 09:34:11 +0000 UTC" firstStartedPulling="2025-11-26 09:34:12.479028856 +0000 UTC m=+9224.625240062" lastFinishedPulling="2025-11-26 09:34:16.104405649 +0000 UTC m=+9228.250616835" observedRunningTime="2025-11-26 09:34:16.559477393 +0000 UTC m=+9228.705688569" watchObservedRunningTime="2025-11-26 09:34:16.57177311 +0000 UTC m=+9228.717984296"
Nov 26 09:34:21 crc kubenswrapper[4909]: I1126 09:34:21.473936 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:21 crc kubenswrapper[4909]: I1126 09:34:21.474572 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:21 crc kubenswrapper[4909]: I1126 09:34:21.568735 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:21 crc kubenswrapper[4909]: I1126 09:34:21.657081 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:21 crc kubenswrapper[4909]: I1126 09:34:21.816096 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jffb5"]
Nov 26 09:34:23 crc kubenswrapper[4909]: I1126 09:34:23.627330 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jffb5" podUID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" containerName="registry-server" containerID="cri-o://72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da" gracePeriod=2
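Everything from the "SyncLoop ADD" at 09:34:11 to the "Killing container" at 09:34:23 is one complete lifecycle of an OLM catalog pod: two extract steps run to completion, the registry server starts (the kill entry above names it registry-server), the probes flip to ready, and the pod is deleted seconds later. The PLEG entries carry enough structure to rebuild that timeline mechanically; a small sketch, again assuming the journal has been saved to a hypothetical kubelet.log:

    import json
    import re
    from collections import defaultdict

    # PLEG entries above look like:
    #   ... "SyncLoop (PLEG): event for pod" pod="ns/name" event={"ID":...,"Type":...,"Data":...}
    EVENT = re.compile(r'pod="(?P<pod>[^"]+)" event=(?P<ev>\{.*?\})')

    timeline = defaultdict(list)
    with open("kubelet.log") as fh:  # hypothetical journal dump
        for line in fh:
            for m in EVENT.finditer(line):
                ev = json.loads(m.group("ev"))
                timeline[m.group("pod")].append((ev["Type"], ev["Data"][:12]))

    for pod, events in sorted(timeline.items()):
        print(pod)
        for typ, cid in events:
            print(f"  {typ:16s} {cid}")

For certified-operators-jffb5 this yields ContainerDied 8ba2642b597f, ContainerStarted 3a10e0cc5234 (the sandbox), ContainerStarted then ContainerDied 2dfd9c2aea64, and finally ContainerStarted 72631d1dc8ce, matching the entries above.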
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.154240 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.208795 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-catalog-content\") pod \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") "
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.208953 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv78r\" (UniqueName: \"kubernetes.io/projected/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-kube-api-access-pv78r\") pod \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") "
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.209215 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-utilities\") pod \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\" (UID: \"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7\") "
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.210793 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-utilities" (OuterVolumeSpecName: "utilities") pod "4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" (UID: "4dcb8e9e-1802-4bb1-bd99-13c2f81aded7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.219955 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-kube-api-access-pv78r" (OuterVolumeSpecName: "kube-api-access-pv78r") pod "4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" (UID: "4dcb8e9e-1802-4bb1-bd99-13c2f81aded7"). InnerVolumeSpecName "kube-api-access-pv78r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.267382 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" (UID: "4dcb8e9e-1802-4bb1-bd99-13c2f81aded7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.311256 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.311291 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.311303 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv78r\" (UniqueName: \"kubernetes.io/projected/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7-kube-api-access-pv78r\") on node \"crc\" DevicePath \"\""
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.641101 4909 generic.go:334] "Generic (PLEG): container finished" podID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" containerID="72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da" exitCode=0
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.641153 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jffb5"
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.641159 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jffb5" event={"ID":"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7","Type":"ContainerDied","Data":"72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da"}
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.641310 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jffb5" event={"ID":"4dcb8e9e-1802-4bb1-bd99-13c2f81aded7","Type":"ContainerDied","Data":"3a10e0cc5234a95a91e3282b9caddb50319d81a92be35efeac7b29282d81b7f6"}
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.641334 4909 scope.go:117] "RemoveContainer" containerID="72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da"
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.670038 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jffb5"]
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.675379 4909 scope.go:117] "RemoveContainer" containerID="2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8"
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.684179 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jffb5"]
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.705875 4909 scope.go:117] "RemoveContainer" containerID="8ba2642b597fa0020bc815875e6d96e66cbea47d5c16fdd433ad27ca01b48643"
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.762264 4909 scope.go:117] "RemoveContainer" containerID="72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da"
Nov 26 09:34:24 crc kubenswrapper[4909]: E1126 09:34:24.762745 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da\": container with ID starting with 72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da not found: ID does not exist" containerID="72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da"
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.762889 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da"} err="failed to get container status \"72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da\": rpc error: code = NotFound desc = could not find container \"72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da\": container with ID starting with 72631d1dc8cef454c92834128b4a4fb959ee4bf5cef54cd486e50851476c55da not found: ID does not exist"
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.763092 4909 scope.go:117] "RemoveContainer" containerID="2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8"
Nov 26 09:34:24 crc kubenswrapper[4909]: E1126 09:34:24.763643 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8\": container with ID starting with 2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8 not found: ID does not exist" containerID="2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8"
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.763750 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8"} err="failed to get container status \"2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8\": rpc error: code = NotFound desc = could not find container \"2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8\": container with ID starting with 2dfd9c2aea64a91a253dd9d149c5ecd645ef6640022f3afa29643c998bb999b8 not found: ID does not exist"
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.763862 4909 scope.go:117] "RemoveContainer" containerID="8ba2642b597fa0020bc815875e6d96e66cbea47d5c16fdd433ad27ca01b48643"
Nov 26 09:34:24 crc kubenswrapper[4909]: E1126 09:34:24.764183 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ba2642b597fa0020bc815875e6d96e66cbea47d5c16fdd433ad27ca01b48643\": container with ID starting with 8ba2642b597fa0020bc815875e6d96e66cbea47d5c16fdd433ad27ca01b48643 not found: ID does not exist" containerID="8ba2642b597fa0020bc815875e6d96e66cbea47d5c16fdd433ad27ca01b48643"
Nov 26 09:34:24 crc kubenswrapper[4909]: I1126 09:34:24.764302 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ba2642b597fa0020bc815875e6d96e66cbea47d5c16fdd433ad27ca01b48643"} err="failed to get container status \"8ba2642b597fa0020bc815875e6d96e66cbea47d5c16fdd433ad27ca01b48643\": rpc error: code = NotFound desc = could not find container \"8ba2642b597fa0020bc815875e6d96e66cbea47d5c16fdd433ad27ca01b48643\": container with ID starting with 8ba2642b597fa0020bc815875e6d96e66cbea47d5c16fdd433ad27ca01b48643 not found: ID does not exist"
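The trio of "ContainerStatus from runtime service failed ... NotFound" errors above reads as a benign ordering artifact rather than data loss: the kubelet issues a second round of "RemoveContainer" for IDs CRI-O has already deleted, and the resulting NotFound is logged by log.go, echoed by pod_container_deletor.go, and then ignored. Counting occurrences per container ID makes the pattern visible; a sketch under the same hypothetical kubelet.log assumption:

    import re
    from collections import Counter

    NOTFOUND = re.compile(r'could not find container \\?"(?P<cid>[0-9a-f]{64})')

    hits = Counter()
    with open("kubelet.log") as fh:  # hypothetical journal dump
        for line in fh:
            for m in NOTFOUND.finditer(line):
                hits[m.group("cid")[:12]] += 1

    # Each already-removed container surfaces twice per attempt: once in the
    # log.go error, once in the pod_container_deletor echo of the same RPC error.
    for cid, n in hits.most_common():
        print(cid, n)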
Nov 26 09:34:25 crc kubenswrapper[4909]: I1126 09:34:25.498984 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:34:25 crc kubenswrapper[4909]: E1126 09:34:25.499405 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:34:26 crc kubenswrapper[4909]: I1126 09:34:26.510639 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" path="/var/lib/kubelet/pods/4dcb8e9e-1802-4bb1-bd99-13c2f81aded7/volumes"
Nov 26 09:34:37 crc kubenswrapper[4909]: I1126 09:34:37.500434 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:34:37 crc kubenswrapper[4909]: E1126 09:34:37.501552 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:34:51 crc kubenswrapper[4909]: I1126 09:34:51.500548 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:34:51 crc kubenswrapper[4909]: E1126 09:34:51.501623 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:35:03 crc kubenswrapper[4909]: I1126 09:35:03.499360 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:35:03 crc kubenswrapper[4909]: E1126 09:35:03.500252 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:35:16 crc kubenswrapper[4909]: I1126 09:35:16.909501 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zcxr8"]
Nov 26 09:35:16 crc kubenswrapper[4909]: E1126 09:35:16.910730 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" containerName="extract-utilities"
Nov 26 09:35:16 crc kubenswrapper[4909]: I1126 09:35:16.910750 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" containerName="extract-utilities"
Nov 26 09:35:16 crc kubenswrapper[4909]: E1126 09:35:16.910799 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" containerName="registry-server"
Nov 26 09:35:16 crc kubenswrapper[4909]: I1126 09:35:16.910808 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" containerName="registry-server"
Nov 26 09:35:16 crc kubenswrapper[4909]: E1126 09:35:16.910854 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" containerName="extract-content"
Nov 26 09:35:16 crc kubenswrapper[4909]: I1126 09:35:16.910860 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" containerName="extract-content"
Nov 26 09:35:16 crc kubenswrapper[4909]: I1126 09:35:16.911078 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dcb8e9e-1802-4bb1-bd99-13c2f81aded7" containerName="registry-server"
Nov 26 09:35:16 crc kubenswrapper[4909]: I1126 09:35:16.912791 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:16 crc kubenswrapper[4909]: I1126 09:35:16.923338 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcxr8"]
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.013698 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ts4f\" (UniqueName: \"kubernetes.io/projected/a21f8e00-69e7-4e9c-aade-fe529489e78d-kube-api-access-7ts4f\") pod \"redhat-operators-zcxr8\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") " pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.014158 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-catalog-content\") pod \"redhat-operators-zcxr8\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") " pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.014512 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-utilities\") pod \"redhat-operators-zcxr8\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") " pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.116644 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-catalog-content\") pod \"redhat-operators-zcxr8\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") " pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.116793 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-utilities\") pod \"redhat-operators-zcxr8\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") " pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.116855 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ts4f\" (UniqueName: \"kubernetes.io/projected/a21f8e00-69e7-4e9c-aade-fe529489e78d-kube-api-access-7ts4f\") pod \"redhat-operators-zcxr8\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") " pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.117332 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-utilities\") pod \"redhat-operators-zcxr8\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") " pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.117434 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-catalog-content\") pod \"redhat-operators-zcxr8\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") " pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.142905 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ts4f\" (UniqueName: \"kubernetes.io/projected/a21f8e00-69e7-4e9c-aade-fe529489e78d-kube-api-access-7ts4f\") pod \"redhat-operators-zcxr8\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") " pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.272142 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.499473 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:35:17 crc kubenswrapper[4909]: E1126 09:35:17.500187 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:35:17 crc kubenswrapper[4909]: I1126 09:35:17.768114 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcxr8"]
Nov 26 09:35:18 crc kubenswrapper[4909]: I1126 09:35:18.323320 4909 generic.go:334] "Generic (PLEG): container finished" podID="a21f8e00-69e7-4e9c-aade-fe529489e78d" containerID="837b2736edb39ac2e393db3817de8f230ef365442092db69f805e07978d449da" exitCode=0
Nov 26 09:35:18 crc kubenswrapper[4909]: I1126 09:35:18.323417 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcxr8" event={"ID":"a21f8e00-69e7-4e9c-aade-fe529489e78d","Type":"ContainerDied","Data":"837b2736edb39ac2e393db3817de8f230ef365442092db69f805e07978d449da"}
Nov 26 09:35:18 crc kubenswrapper[4909]: I1126 09:35:18.323687 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcxr8" event={"ID":"a21f8e00-69e7-4e9c-aade-fe529489e78d","Type":"ContainerStarted","Data":"fadd7a9cbe611a62466afd953a5855d0d59f6b84131b7fee611a532fd94f2293"}
Nov 26 09:35:19 crc kubenswrapper[4909]: I1126 09:35:19.333980 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcxr8" event={"ID":"a21f8e00-69e7-4e9c-aade-fe529489e78d","Type":"ContainerStarted","Data":"9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46"}
Nov 26 09:35:21 crc kubenswrapper[4909]: I1126 09:35:21.359417 4909 generic.go:334] "Generic (PLEG): container finished" podID="a21f8e00-69e7-4e9c-aade-fe529489e78d" containerID="9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46" exitCode=0
Nov 26 09:35:21 crc kubenswrapper[4909]: I1126 09:35:21.359479 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcxr8" event={"ID":"a21f8e00-69e7-4e9c-aade-fe529489e78d","Type":"ContainerDied","Data":"9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46"}
Nov 26 09:35:22 crc kubenswrapper[4909]: I1126 09:35:22.377812 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcxr8" event={"ID":"a21f8e00-69e7-4e9c-aade-fe529489e78d","Type":"ContainerStarted","Data":"892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90"}
Nov 26 09:35:22 crc kubenswrapper[4909]: I1126 09:35:22.406803 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zcxr8" podStartSLOduration=2.957933408 podStartE2EDuration="6.406783151s" podCreationTimestamp="2025-11-26 09:35:16 +0000 UTC" firstStartedPulling="2025-11-26 09:35:18.325643555 +0000 UTC m=+9290.471854741" lastFinishedPulling="2025-11-26 09:35:21.774493278 +0000 UTC m=+9293.920704484" observedRunningTime="2025-11-26 09:35:22.401055254 +0000 UTC m=+9294.547266460" watchObservedRunningTime="2025-11-26 09:35:22.406783151 +0000 UTC m=+9294.552994317"
Nov 26 09:35:27 crc kubenswrapper[4909]: I1126 09:35:27.273299 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:27 crc kubenswrapper[4909]: I1126 09:35:27.273956 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:27 crc kubenswrapper[4909]: I1126 09:35:27.332651 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:27 crc kubenswrapper[4909]: I1126 09:35:27.498916 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:27 crc kubenswrapper[4909]: I1126 09:35:27.578917 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zcxr8"]
Nov 26 09:35:28 crc kubenswrapper[4909]: I1126 09:35:28.514693 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:35:28 crc kubenswrapper[4909]: E1126 09:35:28.515588 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:35:29 crc kubenswrapper[4909]: I1126 09:35:29.461208 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zcxr8" podUID="a21f8e00-69e7-4e9c-aade-fe529489e78d" containerName="registry-server" containerID="cri-o://892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90" gracePeriod=2
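The latency-tracker entry above encodes its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window measured on the monotonic clock (the m=+... offsets). Checking it with the numbers printed for redhat-operators-zcxr8 (a reading inferred from the values themselves, not from kubelet source):

    # Monotonic offsets (m=+...) copied from the latency-tracker entry above.
    first_started_pulling = 9290.471854741
    last_finished_pulling = 9293.920704484

    # watchObservedRunningTime (09:35:22.406783151) - podCreationTimestamp (09:35:16)
    e2e = 6.406783151

    pull_window = last_finished_pulling - first_started_pulling  # 3.448849743s
    slo = e2e - pull_window
    print(f"{slo:.9f}")  # 2.957933408, matching podStartSLOduration above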
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.058188 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.219449 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-utilities\") pod \"a21f8e00-69e7-4e9c-aade-fe529489e78d\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") "
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.219734 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-catalog-content\") pod \"a21f8e00-69e7-4e9c-aade-fe529489e78d\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") "
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.219954 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ts4f\" (UniqueName: \"kubernetes.io/projected/a21f8e00-69e7-4e9c-aade-fe529489e78d-kube-api-access-7ts4f\") pod \"a21f8e00-69e7-4e9c-aade-fe529489e78d\" (UID: \"a21f8e00-69e7-4e9c-aade-fe529489e78d\") "
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.222538 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-utilities" (OuterVolumeSpecName: "utilities") pod "a21f8e00-69e7-4e9c-aade-fe529489e78d" (UID: "a21f8e00-69e7-4e9c-aade-fe529489e78d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.223392 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.291835 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a21f8e00-69e7-4e9c-aade-fe529489e78d" (UID: "a21f8e00-69e7-4e9c-aade-fe529489e78d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.325570 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a21f8e00-69e7-4e9c-aade-fe529489e78d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.476053 4909 generic.go:334] "Generic (PLEG): container finished" podID="a21f8e00-69e7-4e9c-aade-fe529489e78d" containerID="892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90" exitCode=0
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.476107 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcxr8" event={"ID":"a21f8e00-69e7-4e9c-aade-fe529489e78d","Type":"ContainerDied","Data":"892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90"}
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.476138 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcxr8"
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.476160 4909 scope.go:117] "RemoveContainer" containerID="892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90"
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.476146 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcxr8" event={"ID":"a21f8e00-69e7-4e9c-aade-fe529489e78d","Type":"ContainerDied","Data":"fadd7a9cbe611a62466afd953a5855d0d59f6b84131b7fee611a532fd94f2293"}
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.508471 4909 scope.go:117] "RemoveContainer" containerID="9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46"
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.741299 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a21f8e00-69e7-4e9c-aade-fe529489e78d-kube-api-access-7ts4f" (OuterVolumeSpecName: "kube-api-access-7ts4f") pod "a21f8e00-69e7-4e9c-aade-fe529489e78d" (UID: "a21f8e00-69e7-4e9c-aade-fe529489e78d"). InnerVolumeSpecName "kube-api-access-7ts4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.779947 4909 scope.go:117] "RemoveContainer" containerID="837b2736edb39ac2e393db3817de8f230ef365442092db69f805e07978d449da"
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.836312 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ts4f\" (UniqueName: \"kubernetes.io/projected/a21f8e00-69e7-4e9c-aade-fe529489e78d-kube-api-access-7ts4f\") on node \"crc\" DevicePath \"\""
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.865006 4909 scope.go:117] "RemoveContainer" containerID="892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90"
Nov 26 09:35:30 crc kubenswrapper[4909]: E1126 09:35:30.865561 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90\": container with ID starting with 892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90 not found: ID does not exist" containerID="892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90"
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.865634 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90"} err="failed to get container status \"892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90\": rpc error: code = NotFound desc = could not find container \"892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90\": container with ID starting with 892e9d4ddbbb8f22d4ca1d37a42fa24826b8b50af7e97088ac51706451d21f90 not found: ID does not exist"
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.865663 4909 scope.go:117] "RemoveContainer" containerID="9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46"
Nov 26 09:35:30 crc kubenswrapper[4909]: E1126 09:35:30.866119 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46\": container with ID starting with 9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46 not found: ID does not exist" containerID="9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46"
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.866152 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46"} err="failed to get container status \"9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46\": rpc error: code = NotFound desc = could not find container \"9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46\": container with ID starting with 9bc0ae9d0d6e374edcec973e129d32cf825c71e990cd1079fd51d5f62380eb46 not found: ID does not exist"
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.866173 4909 scope.go:117] "RemoveContainer" containerID="837b2736edb39ac2e393db3817de8f230ef365442092db69f805e07978d449da"
Nov 26 09:35:30 crc kubenswrapper[4909]: E1126 09:35:30.866449 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"837b2736edb39ac2e393db3817de8f230ef365442092db69f805e07978d449da\": container with ID starting with 837b2736edb39ac2e393db3817de8f230ef365442092db69f805e07978d449da not found: ID does not exist" containerID="837b2736edb39ac2e393db3817de8f230ef365442092db69f805e07978d449da"
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.866476 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837b2736edb39ac2e393db3817de8f230ef365442092db69f805e07978d449da"} err="failed to get container status \"837b2736edb39ac2e393db3817de8f230ef365442092db69f805e07978d449da\": rpc error: code = NotFound desc = could not find container \"837b2736edb39ac2e393db3817de8f230ef365442092db69f805e07978d449da\": container with ID starting with 837b2736edb39ac2e393db3817de8f230ef365442092db69f805e07978d449da not found: ID does not exist"
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.875466 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zcxr8"]
Nov 26 09:35:30 crc kubenswrapper[4909]: I1126 09:35:30.887109 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zcxr8"]
Nov 26 09:35:32 crc kubenswrapper[4909]: I1126 09:35:32.514584 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a21f8e00-69e7-4e9c-aade-fe529489e78d" path="/var/lib/kubelet/pods/a21f8e00-69e7-4e9c-aade-fe529489e78d/volumes"
Nov 26 09:35:41 crc kubenswrapper[4909]: I1126 09:35:41.500144 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:35:41 crc kubenswrapper[4909]: E1126 09:35:41.501117 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:35:54 crc kubenswrapper[4909]: I1126 09:35:54.499497 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:35:54 crc kubenswrapper[4909]: E1126 09:35:54.500194 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:36:05 crc kubenswrapper[4909]: I1126 09:36:05.499457 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:36:05 crc kubenswrapper[4909]: E1126 09:36:05.501310 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:36:16 crc kubenswrapper[4909]: I1126 09:36:16.500875 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:36:16 crc kubenswrapper[4909]: E1126 09:36:16.502163 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:36:30 crc kubenswrapper[4909]: I1126 09:36:30.499509 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:36:30 crc kubenswrapper[4909]: E1126 09:36:30.500263 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:36:45 crc kubenswrapper[4909]: I1126 09:36:45.499264 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:36:45 crc kubenswrapper[4909]: E1126 09:36:45.500463 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:36:58 crc kubenswrapper[4909]: I1126 09:36:58.505345 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:36:58 crc kubenswrapper[4909]: E1126 09:36:58.506185 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:37:09 crc kubenswrapper[4909]: I1126 09:37:09.502445 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:37:09 crc kubenswrapper[4909]: E1126 09:37:09.504378 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:37:16 crc kubenswrapper[4909]: I1126 09:37:16.681938 4909 generic.go:334] "Generic (PLEG): container finished" podID="b787ec2d-08c2-4282-9a94-fe5dc36fb14c" containerID="b62a0ae8de69bbc9f7d7aff8c4a946c0714710f92b1ad0e869e3fc91ce158681" exitCode=0
Nov 26 09:37:16 crc kubenswrapper[4909]: I1126 09:37:16.682528 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" event={"ID":"b787ec2d-08c2-4282-9a94-fe5dc36fb14c","Type":"ContainerDied","Data":"b62a0ae8de69bbc9f7d7aff8c4a946c0714710f92b1ad0e869e3fc91ce158681"}
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.265507 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff"
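With the job-style pod nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff finished (exitCode=0 above), the reconciler tears down its eleven volumes in the entries that follow: each "UnmountVolume started" should be matched by an "UnmountVolume.TearDown succeeded" and a closing "Volume detached". A quick cross-check sketch, under the same hypothetical kubelet.log assumption:

    import re

    START = re.compile(r'UnmountVolume started for volume \\?"(?P<v>[^"\\]+)')
    DETACHED = re.compile(r'Volume detached for volume \\?"(?P<v>[^"\\]+)')

    started, detached = set(), set()
    with open("kubelet.log") as fh:  # hypothetical journal dump
        for line in fh:
            started.update(m.group("v") for m in START.finditer(line))
            detached.update(m.group("v") for m in DETACHED.finditer(line))

    print("unmounted but never detached:", sorted(started - detached) or "none")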
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.357902 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-inventory\") pod \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") "
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.358001 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-1\") pod \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") "
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.358161 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ceph\") pod \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") "
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.358203 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-1\") pod \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") "
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.358235 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-1\") pod \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") "
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.358279 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-0\") pod \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") "
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.358297 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ssh-key\") pod \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") "
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.358332 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-combined-ca-bundle\") pod \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") "
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.358371 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-0\") pod \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") "
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.358397 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms7v8\" (UniqueName: \"kubernetes.io/projected/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-kube-api-access-ms7v8\") pod \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") "
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.358434 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-0\") pod \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\" (UID: \"b787ec2d-08c2-4282-9a94-fe5dc36fb14c\") "
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.365053 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "b787ec2d-08c2-4282-9a94-fe5dc36fb14c" (UID: "b787ec2d-08c2-4282-9a94-fe5dc36fb14c"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.365081 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ceph" (OuterVolumeSpecName: "ceph") pod "b787ec2d-08c2-4282-9a94-fe5dc36fb14c" (UID: "b787ec2d-08c2-4282-9a94-fe5dc36fb14c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.382425 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-kube-api-access-ms7v8" (OuterVolumeSpecName: "kube-api-access-ms7v8") pod "b787ec2d-08c2-4282-9a94-fe5dc36fb14c" (UID: "b787ec2d-08c2-4282-9a94-fe5dc36fb14c"). InnerVolumeSpecName "kube-api-access-ms7v8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.392028 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-inventory" (OuterVolumeSpecName: "inventory") pod "b787ec2d-08c2-4282-9a94-fe5dc36fb14c" (UID: "b787ec2d-08c2-4282-9a94-fe5dc36fb14c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.394637 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "b787ec2d-08c2-4282-9a94-fe5dc36fb14c" (UID: "b787ec2d-08c2-4282-9a94-fe5dc36fb14c"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.394801 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b787ec2d-08c2-4282-9a94-fe5dc36fb14c" (UID: "b787ec2d-08c2-4282-9a94-fe5dc36fb14c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.396136 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b787ec2d-08c2-4282-9a94-fe5dc36fb14c" (UID: "b787ec2d-08c2-4282-9a94-fe5dc36fb14c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.396899 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "b787ec2d-08c2-4282-9a94-fe5dc36fb14c" (UID: "b787ec2d-08c2-4282-9a94-fe5dc36fb14c"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.399790 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b787ec2d-08c2-4282-9a94-fe5dc36fb14c" (UID: "b787ec2d-08c2-4282-9a94-fe5dc36fb14c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.403066 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b787ec2d-08c2-4282-9a94-fe5dc36fb14c" (UID: "b787ec2d-08c2-4282-9a94-fe5dc36fb14c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.403194 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b787ec2d-08c2-4282-9a94-fe5dc36fb14c" (UID: "b787ec2d-08c2-4282-9a94-fe5dc36fb14c"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.459818 4909 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ceph\") on node \"crc\" DevicePath \"\""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.459845 4909 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.459857 4909 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.459865 4909 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.459875 4909 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.459882 4909 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.459890 4909 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.459900 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms7v8\" (UniqueName: \"kubernetes.io/projected/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-kube-api-access-ms7v8\") on node \"crc\" DevicePath \"\""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.459908 4909 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.459916 4909 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-inventory\") on node \"crc\" DevicePath \"\""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.459924 4909 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/b787ec2d-08c2-4282-9a94-fe5dc36fb14c-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\""
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.721939 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff" event={"ID":"b787ec2d-08c2-4282-9a94-fe5dc36fb14c","Type":"ContainerDied","Data":"44b5032ffc6eed022cdd8bb46473d2f436650c3cdf5c46755bc6d89c685c3840"}
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.721984 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b5032ffc6eed022cdd8bb46473d2f436650c3cdf5c46755bc6d89c685c3840"
Nov 26 09:37:18 crc kubenswrapper[4909]: I1126 09:37:18.722072 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff"
Nov 26 09:37:24 crc kubenswrapper[4909]: I1126 09:37:24.499700 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:37:24 crc kubenswrapper[4909]: E1126 09:37:24.500800 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9"
Nov 26 09:37:37 crc kubenswrapper[4909]: I1126 09:37:37.499369 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d"
Nov 26 09:37:37 crc kubenswrapper[4909]: I1126 09:37:37.970453 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"e96ca0717a6fedd9cb0d2ad5895dc8f81d31bc03ea5de712a38f102288bdb2cb"}
Nov 26 09:39:31 crc kubenswrapper[4909]: I1126 09:39:31.337752 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"]
Nov 26 09:39:31 crc kubenswrapper[4909]: I1126 09:39:31.338588 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="c15d16fb-aa11-4bdd-b044-c8fd74f693b8" containerName="adoption" containerID="cri-o://3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032" gracePeriod=30
Nov 26 09:39:37 crc kubenswrapper[4909]: I1126 09:39:37.301369 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 09:39:37 crc kubenswrapper[4909]: I1126 09:39:37.302054 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.350312 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.448577 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnf44\" (UniqueName: \"kubernetes.io/projected/c15d16fb-aa11-4bdd-b044-c8fd74f693b8-kube-api-access-wnf44\") pod \"c15d16fb-aa11-4bdd-b044-c8fd74f693b8\" (UID: \"c15d16fb-aa11-4bdd-b044-c8fd74f693b8\") " Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.449714 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\") pod \"c15d16fb-aa11-4bdd-b044-c8fd74f693b8\" (UID: \"c15d16fb-aa11-4bdd-b044-c8fd74f693b8\") " Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.456700 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15d16fb-aa11-4bdd-b044-c8fd74f693b8-kube-api-access-wnf44" (OuterVolumeSpecName: "kube-api-access-wnf44") pod "c15d16fb-aa11-4bdd-b044-c8fd74f693b8" (UID: "c15d16fb-aa11-4bdd-b044-c8fd74f693b8"). InnerVolumeSpecName "kube-api-access-wnf44". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.481279 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8" (OuterVolumeSpecName: "mariadb-data") pod "c15d16fb-aa11-4bdd-b044-c8fd74f693b8" (UID: "c15d16fb-aa11-4bdd-b044-c8fd74f693b8"). InnerVolumeSpecName "pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.553692 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnf44\" (UniqueName: \"kubernetes.io/projected/c15d16fb-aa11-4bdd-b044-c8fd74f693b8-kube-api-access-wnf44\") on node \"crc\" DevicePath \"\"" Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.554023 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\") on node \"crc\" " Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.595081 4909 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.595301 4909 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8") on node "crc" Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.658550 4909 reconciler_common.go:293] "Volume detached for volume \"pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add7576-e4e6-46e7-a887-33ba8111e1b8\") on node \"crc\" DevicePath \"\"" Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.766483 4909 generic.go:334] "Generic (PLEG): container finished" podID="c15d16fb-aa11-4bdd-b044-c8fd74f693b8" containerID="3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032" exitCode=137 Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.766549 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"c15d16fb-aa11-4bdd-b044-c8fd74f693b8","Type":"ContainerDied","Data":"3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032"} Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.766725 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"c15d16fb-aa11-4bdd-b044-c8fd74f693b8","Type":"ContainerDied","Data":"9ad5d69ded62a2e391d71edccfe17d7bf1f60fefefebe69f582dd176c0bf78b1"} Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.766752 4909 scope.go:117] "RemoveContainer" containerID="3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032" Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.767178 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.807149 4909 scope.go:117] "RemoveContainer" containerID="3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032" Nov 26 09:40:02 crc kubenswrapper[4909]: E1126 09:40:02.807700 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032\": container with ID starting with 3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032 not found: ID does not exist" containerID="3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032" Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.807752 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032"} err="failed to get container status \"3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032\": rpc error: code = NotFound desc = could not find container \"3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032\": container with ID starting with 3b83ab264f23e41f2c57311c4dfb83d781c6d8cf9f6138b00e61beb7554cb032 not found: ID does not exist" Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.824301 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Nov 26 09:40:02 crc kubenswrapper[4909]: I1126 09:40:02.839682 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Nov 26 09:40:03 crc kubenswrapper[4909]: I1126 09:40:03.535637 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 26 09:40:03 crc 
kubenswrapper[4909]: I1126 09:40:03.536219 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="ade07b13-f382-46fa-805b-3c6d479a6a13" containerName="adoption" containerID="cri-o://884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5" gracePeriod=30 Nov 26 09:40:04 crc kubenswrapper[4909]: I1126 09:40:04.512808 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15d16fb-aa11-4bdd-b044-c8fd74f693b8" path="/var/lib/kubelet/pods/c15d16fb-aa11-4bdd-b044-c8fd74f693b8/volumes" Nov 26 09:40:07 crc kubenswrapper[4909]: I1126 09:40:07.300763 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:40:07 crc kubenswrapper[4909]: I1126 09:40:07.301521 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.738708 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6bdbg"] Nov 26 09:40:28 crc kubenswrapper[4909]: E1126 09:40:28.739752 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a21f8e00-69e7-4e9c-aade-fe529489e78d" containerName="registry-server" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.739769 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a21f8e00-69e7-4e9c-aade-fe529489e78d" containerName="registry-server" Nov 26 09:40:28 crc kubenswrapper[4909]: E1126 09:40:28.739808 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b787ec2d-08c2-4282-9a94-fe5dc36fb14c" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.739818 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="b787ec2d-08c2-4282-9a94-fe5dc36fb14c" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 26 09:40:28 crc kubenswrapper[4909]: E1126 09:40:28.739846 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a21f8e00-69e7-4e9c-aade-fe529489e78d" containerName="extract-content" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.739855 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a21f8e00-69e7-4e9c-aade-fe529489e78d" containerName="extract-content" Nov 26 09:40:28 crc kubenswrapper[4909]: E1126 09:40:28.739884 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a21f8e00-69e7-4e9c-aade-fe529489e78d" containerName="extract-utilities" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.739894 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a21f8e00-69e7-4e9c-aade-fe529489e78d" containerName="extract-utilities" Nov 26 09:40:28 crc kubenswrapper[4909]: E1126 09:40:28.739928 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15d16fb-aa11-4bdd-b044-c8fd74f693b8" containerName="adoption" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.740122 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15d16fb-aa11-4bdd-b044-c8fd74f693b8" containerName="adoption" Nov 
26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.740353 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="b787ec2d-08c2-4282-9a94-fe5dc36fb14c" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.740390 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15d16fb-aa11-4bdd-b044-c8fd74f693b8" containerName="adoption" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.740412 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a21f8e00-69e7-4e9c-aade-fe529489e78d" containerName="registry-server" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.742690 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.784359 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bdbg"] Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.883293 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-utilities\") pod \"redhat-marketplace-6bdbg\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.883655 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmsnl\" (UniqueName: \"kubernetes.io/projected/c185097f-c62c-41b5-87d6-20dd4b660edc-kube-api-access-pmsnl\") pod \"redhat-marketplace-6bdbg\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.883762 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-catalog-content\") pod \"redhat-marketplace-6bdbg\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.985940 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-utilities\") pod \"redhat-marketplace-6bdbg\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.986002 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmsnl\" (UniqueName: \"kubernetes.io/projected/c185097f-c62c-41b5-87d6-20dd4b660edc-kube-api-access-pmsnl\") pod \"redhat-marketplace-6bdbg\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.986108 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-catalog-content\") pod \"redhat-marketplace-6bdbg\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.986651 4909 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-utilities\") pod \"redhat-marketplace-6bdbg\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:28 crc kubenswrapper[4909]: I1126 09:40:28.986667 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-catalog-content\") pod \"redhat-marketplace-6bdbg\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:29 crc kubenswrapper[4909]: I1126 09:40:29.003147 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmsnl\" (UniqueName: \"kubernetes.io/projected/c185097f-c62c-41b5-87d6-20dd4b660edc-kube-api-access-pmsnl\") pod \"redhat-marketplace-6bdbg\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:29 crc kubenswrapper[4909]: I1126 09:40:29.091426 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:29 crc kubenswrapper[4909]: I1126 09:40:29.763748 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bdbg"] Nov 26 09:40:29 crc kubenswrapper[4909]: W1126 09:40:29.766973 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc185097f_c62c_41b5_87d6_20dd4b660edc.slice/crio-6490269131e9fac482ae9c203d93d81bb2d2244d4e0c6d4c7ffc98b135ec40bf WatchSource:0}: Error finding container 6490269131e9fac482ae9c203d93d81bb2d2244d4e0c6d4c7ffc98b135ec40bf: Status 404 returned error can't find the container with id 6490269131e9fac482ae9c203d93d81bb2d2244d4e0c6d4c7ffc98b135ec40bf Nov 26 09:40:30 crc kubenswrapper[4909]: I1126 09:40:30.135949 4909 generic.go:334] "Generic (PLEG): container finished" podID="c185097f-c62c-41b5-87d6-20dd4b660edc" containerID="316bffc2321b062ab1fea45118b935bd81cf3a14aed3e2d4a2c2fb1c1b8d04b6" exitCode=0 Nov 26 09:40:30 crc kubenswrapper[4909]: I1126 09:40:30.136175 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bdbg" event={"ID":"c185097f-c62c-41b5-87d6-20dd4b660edc","Type":"ContainerDied","Data":"316bffc2321b062ab1fea45118b935bd81cf3a14aed3e2d4a2c2fb1c1b8d04b6"} Nov 26 09:40:30 crc kubenswrapper[4909]: I1126 09:40:30.136197 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bdbg" event={"ID":"c185097f-c62c-41b5-87d6-20dd4b660edc","Type":"ContainerStarted","Data":"6490269131e9fac482ae9c203d93d81bb2d2244d4e0c6d4c7ffc98b135ec40bf"} Nov 26 09:40:30 crc kubenswrapper[4909]: I1126 09:40:30.139144 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 09:40:31 crc kubenswrapper[4909]: I1126 09:40:31.148346 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bdbg" event={"ID":"c185097f-c62c-41b5-87d6-20dd4b660edc","Type":"ContainerStarted","Data":"36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20"} Nov 26 09:40:32 crc kubenswrapper[4909]: I1126 09:40:32.166704 4909 generic.go:334] "Generic (PLEG): container finished" podID="c185097f-c62c-41b5-87d6-20dd4b660edc" 
containerID="36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20" exitCode=0 Nov 26 09:40:32 crc kubenswrapper[4909]: I1126 09:40:32.166805 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bdbg" event={"ID":"c185097f-c62c-41b5-87d6-20dd4b660edc","Type":"ContainerDied","Data":"36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20"} Nov 26 09:40:33 crc kubenswrapper[4909]: I1126 09:40:33.182560 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bdbg" event={"ID":"c185097f-c62c-41b5-87d6-20dd4b660edc","Type":"ContainerStarted","Data":"a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e"} Nov 26 09:40:33 crc kubenswrapper[4909]: I1126 09:40:33.220685 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6bdbg" podStartSLOduration=2.7751772729999997 podStartE2EDuration="5.220664427s" podCreationTimestamp="2025-11-26 09:40:28 +0000 UTC" firstStartedPulling="2025-11-26 09:40:30.138848758 +0000 UTC m=+9602.285059924" lastFinishedPulling="2025-11-26 09:40:32.584335902 +0000 UTC m=+9604.730547078" observedRunningTime="2025-11-26 09:40:33.208885803 +0000 UTC m=+9605.355096989" watchObservedRunningTime="2025-11-26 09:40:33.220664427 +0000 UTC m=+9605.366875603" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.150327 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.198002 4909 generic.go:334] "Generic (PLEG): container finished" podID="ade07b13-f382-46fa-805b-3c6d479a6a13" containerID="884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5" exitCode=137 Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.198559 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"ade07b13-f382-46fa-805b-3c6d479a6a13","Type":"ContainerDied","Data":"884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5"} Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.198615 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.198639 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"ade07b13-f382-46fa-805b-3c6d479a6a13","Type":"ContainerDied","Data":"e9e5310c8d0f6d67cd7a4adcaa476fda78fe890bc206ba3cbb5efbbac583fcb4"} Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.198665 4909 scope.go:117] "RemoveContainer" containerID="884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.218906 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z59b2\" (UniqueName: \"kubernetes.io/projected/ade07b13-f382-46fa-805b-3c6d479a6a13-kube-api-access-z59b2\") pod \"ade07b13-f382-46fa-805b-3c6d479a6a13\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.221505 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84be150-311b-43e8-972d-6239b995b74c\") pod \"ade07b13-f382-46fa-805b-3c6d479a6a13\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.221666 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/ade07b13-f382-46fa-805b-3c6d479a6a13-ovn-data-cert\") pod \"ade07b13-f382-46fa-805b-3c6d479a6a13\" (UID: \"ade07b13-f382-46fa-805b-3c6d479a6a13\") " Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.225555 4909 scope.go:117] "RemoveContainer" containerID="884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5" Nov 26 09:40:34 crc kubenswrapper[4909]: E1126 09:40:34.229282 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5\": container with ID starting with 884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5 not found: ID does not exist" containerID="884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.229447 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade07b13-f382-46fa-805b-3c6d479a6a13-kube-api-access-z59b2" (OuterVolumeSpecName: "kube-api-access-z59b2") pod "ade07b13-f382-46fa-805b-3c6d479a6a13" (UID: "ade07b13-f382-46fa-805b-3c6d479a6a13"). InnerVolumeSpecName "kube-api-access-z59b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.229629 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5"} err="failed to get container status \"884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5\": rpc error: code = NotFound desc = could not find container \"884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5\": container with ID starting with 884bd9eabebe14113b72c79eff01306e9682dfa66e619c0a04028817c468e7c5 not found: ID does not exist" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.230660 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade07b13-f382-46fa-805b-3c6d479a6a13-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "ade07b13-f382-46fa-805b-3c6d479a6a13" (UID: "ade07b13-f382-46fa-805b-3c6d479a6a13"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.255773 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84be150-311b-43e8-972d-6239b995b74c" (OuterVolumeSpecName: "ovn-data") pod "ade07b13-f382-46fa-805b-3c6d479a6a13" (UID: "ade07b13-f382-46fa-805b-3c6d479a6a13"). InnerVolumeSpecName "pvc-c84be150-311b-43e8-972d-6239b995b74c". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.324576 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z59b2\" (UniqueName: \"kubernetes.io/projected/ade07b13-f382-46fa-805b-3c6d479a6a13-kube-api-access-z59b2\") on node \"crc\" DevicePath \"\"" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.324661 4909 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c84be150-311b-43e8-972d-6239b995b74c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84be150-311b-43e8-972d-6239b995b74c\") on node \"crc\" " Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.324678 4909 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/ade07b13-f382-46fa-805b-3c6d479a6a13-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.350216 4909 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.350400 4909 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c84be150-311b-43e8-972d-6239b995b74c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84be150-311b-43e8-972d-6239b995b74c") on node "crc" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.426899 4909 reconciler_common.go:293] "Volume detached for volume \"pvc-c84be150-311b-43e8-972d-6239b995b74c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84be150-311b-43e8-972d-6239b995b74c\") on node \"crc\" DevicePath \"\"" Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.554831 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 26 09:40:34 crc kubenswrapper[4909]: I1126 09:40:34.565229 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Nov 26 09:40:36 crc kubenswrapper[4909]: I1126 09:40:36.516785 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade07b13-f382-46fa-805b-3c6d479a6a13" path="/var/lib/kubelet/pods/ade07b13-f382-46fa-805b-3c6d479a6a13/volumes" Nov 26 09:40:37 crc kubenswrapper[4909]: I1126 09:40:37.300959 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:40:37 crc kubenswrapper[4909]: I1126 09:40:37.301045 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:40:37 crc kubenswrapper[4909]: I1126 09:40:37.301111 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 09:40:37 crc kubenswrapper[4909]: I1126 09:40:37.301963 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e96ca0717a6fedd9cb0d2ad5895dc8f81d31bc03ea5de712a38f102288bdb2cb"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 09:40:37 crc kubenswrapper[4909]: I1126 09:40:37.302061 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://e96ca0717a6fedd9cb0d2ad5895dc8f81d31bc03ea5de712a38f102288bdb2cb" gracePeriod=600 Nov 26 09:40:38 crc kubenswrapper[4909]: I1126 09:40:38.252667 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="e96ca0717a6fedd9cb0d2ad5895dc8f81d31bc03ea5de712a38f102288bdb2cb" exitCode=0 Nov 26 09:40:38 crc kubenswrapper[4909]: I1126 09:40:38.252704 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"e96ca0717a6fedd9cb0d2ad5895dc8f81d31bc03ea5de712a38f102288bdb2cb"} Nov 26 09:40:38 crc 
kubenswrapper[4909]: I1126 09:40:38.253236 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800"} Nov 26 09:40:38 crc kubenswrapper[4909]: I1126 09:40:38.253261 4909 scope.go:117] "RemoveContainer" containerID="78fca2fd104fc81f31e255e0fe650898ce2703e79a1bf6aa1004dd6d73c0936d" Nov 26 09:40:39 crc kubenswrapper[4909]: I1126 09:40:39.092911 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:39 crc kubenswrapper[4909]: I1126 09:40:39.093249 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:39 crc kubenswrapper[4909]: I1126 09:40:39.150750 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:39 crc kubenswrapper[4909]: I1126 09:40:39.347531 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:39 crc kubenswrapper[4909]: I1126 09:40:39.415580 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bdbg"] Nov 26 09:40:41 crc kubenswrapper[4909]: I1126 09:40:41.288806 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6bdbg" podUID="c185097f-c62c-41b5-87d6-20dd4b660edc" containerName="registry-server" containerID="cri-o://a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e" gracePeriod=2 Nov 26 09:40:41 crc kubenswrapper[4909]: I1126 09:40:41.861212 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:41 crc kubenswrapper[4909]: I1126 09:40:41.891513 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmsnl\" (UniqueName: \"kubernetes.io/projected/c185097f-c62c-41b5-87d6-20dd4b660edc-kube-api-access-pmsnl\") pod \"c185097f-c62c-41b5-87d6-20dd4b660edc\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " Nov 26 09:40:41 crc kubenswrapper[4909]: I1126 09:40:41.891822 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-catalog-content\") pod \"c185097f-c62c-41b5-87d6-20dd4b660edc\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " Nov 26 09:40:41 crc kubenswrapper[4909]: I1126 09:40:41.898349 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c185097f-c62c-41b5-87d6-20dd4b660edc-kube-api-access-pmsnl" (OuterVolumeSpecName: "kube-api-access-pmsnl") pod "c185097f-c62c-41b5-87d6-20dd4b660edc" (UID: "c185097f-c62c-41b5-87d6-20dd4b660edc"). InnerVolumeSpecName "kube-api-access-pmsnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:40:41 crc kubenswrapper[4909]: I1126 09:40:41.903008 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-utilities\") pod \"c185097f-c62c-41b5-87d6-20dd4b660edc\" (UID: \"c185097f-c62c-41b5-87d6-20dd4b660edc\") " Nov 26 09:40:41 crc kubenswrapper[4909]: I1126 09:40:41.903748 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-utilities" (OuterVolumeSpecName: "utilities") pod "c185097f-c62c-41b5-87d6-20dd4b660edc" (UID: "c185097f-c62c-41b5-87d6-20dd4b660edc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:40:41 crc kubenswrapper[4909]: I1126 09:40:41.904205 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:40:41 crc kubenswrapper[4909]: I1126 09:40:41.904317 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmsnl\" (UniqueName: \"kubernetes.io/projected/c185097f-c62c-41b5-87d6-20dd4b660edc-kube-api-access-pmsnl\") on node \"crc\" DevicePath \"\"" Nov 26 09:40:41 crc kubenswrapper[4909]: I1126 09:40:41.934495 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c185097f-c62c-41b5-87d6-20dd4b660edc" (UID: "c185097f-c62c-41b5-87d6-20dd4b660edc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.006243 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c185097f-c62c-41b5-87d6-20dd4b660edc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.302785 4909 generic.go:334] "Generic (PLEG): container finished" podID="c185097f-c62c-41b5-87d6-20dd4b660edc" containerID="a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e" exitCode=0 Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.302850 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bdbg" event={"ID":"c185097f-c62c-41b5-87d6-20dd4b660edc","Type":"ContainerDied","Data":"a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e"} Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.303138 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bdbg" event={"ID":"c185097f-c62c-41b5-87d6-20dd4b660edc","Type":"ContainerDied","Data":"6490269131e9fac482ae9c203d93d81bb2d2244d4e0c6d4c7ffc98b135ec40bf"} Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.303169 4909 scope.go:117] "RemoveContainer" containerID="a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.302887 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bdbg" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.351900 4909 scope.go:117] "RemoveContainer" containerID="36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.353234 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bdbg"] Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.363173 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bdbg"] Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.387899 4909 scope.go:117] "RemoveContainer" containerID="316bffc2321b062ab1fea45118b935bd81cf3a14aed3e2d4a2c2fb1c1b8d04b6" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.447270 4909 scope.go:117] "RemoveContainer" containerID="a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e" Nov 26 09:40:42 crc kubenswrapper[4909]: E1126 09:40:42.447752 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e\": container with ID starting with a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e not found: ID does not exist" containerID="a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.447789 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e"} err="failed to get container status \"a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e\": rpc error: code = NotFound desc = could not find container \"a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e\": container with ID starting with a4935ebfb88b0b890981bef7cd6224375bf6d6bc395ae6f00e802d421b5d1b3e not found: ID does not exist" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.447815 4909 scope.go:117] "RemoveContainer" containerID="36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20" Nov 26 09:40:42 crc kubenswrapper[4909]: E1126 09:40:42.448209 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20\": container with ID starting with 36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20 not found: ID does not exist" containerID="36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.448237 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20"} err="failed to get container status \"36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20\": rpc error: code = NotFound desc = could not find container \"36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20\": container with ID starting with 36185645ce3932b5fb75986c48ab9de936909ca5d8842a40592723fd217d8a20 not found: ID does not exist" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.448254 4909 scope.go:117] "RemoveContainer" containerID="316bffc2321b062ab1fea45118b935bd81cf3a14aed3e2d4a2c2fb1c1b8d04b6" Nov 26 09:40:42 crc kubenswrapper[4909]: E1126 09:40:42.448469 4909 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"316bffc2321b062ab1fea45118b935bd81cf3a14aed3e2d4a2c2fb1c1b8d04b6\": container with ID starting with 316bffc2321b062ab1fea45118b935bd81cf3a14aed3e2d4a2c2fb1c1b8d04b6 not found: ID does not exist" containerID="316bffc2321b062ab1fea45118b935bd81cf3a14aed3e2d4a2c2fb1c1b8d04b6" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.448492 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"316bffc2321b062ab1fea45118b935bd81cf3a14aed3e2d4a2c2fb1c1b8d04b6"} err="failed to get container status \"316bffc2321b062ab1fea45118b935bd81cf3a14aed3e2d4a2c2fb1c1b8d04b6\": rpc error: code = NotFound desc = could not find container \"316bffc2321b062ab1fea45118b935bd81cf3a14aed3e2d4a2c2fb1c1b8d04b6\": container with ID starting with 316bffc2321b062ab1fea45118b935bd81cf3a14aed3e2d4a2c2fb1c1b8d04b6 not found: ID does not exist" Nov 26 09:40:42 crc kubenswrapper[4909]: I1126 09:40:42.512172 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c185097f-c62c-41b5-87d6-20dd4b660edc" path="/var/lib/kubelet/pods/c185097f-c62c-41b5-87d6-20dd4b660edc/volumes" Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.959799 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-887dn/must-gather-vhv4d"] Nov 26 09:41:40 crc kubenswrapper[4909]: E1126 09:41:40.961961 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c185097f-c62c-41b5-87d6-20dd4b660edc" containerName="registry-server" Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.962065 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c185097f-c62c-41b5-87d6-20dd4b660edc" containerName="registry-server" Nov 26 09:41:40 crc kubenswrapper[4909]: E1126 09:41:40.962136 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade07b13-f382-46fa-805b-3c6d479a6a13" containerName="adoption" Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.962188 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade07b13-f382-46fa-805b-3c6d479a6a13" containerName="adoption" Nov 26 09:41:40 crc kubenswrapper[4909]: E1126 09:41:40.962282 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c185097f-c62c-41b5-87d6-20dd4b660edc" containerName="extract-utilities" Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.962334 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c185097f-c62c-41b5-87d6-20dd4b660edc" containerName="extract-utilities" Nov 26 09:41:40 crc kubenswrapper[4909]: E1126 09:41:40.962385 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c185097f-c62c-41b5-87d6-20dd4b660edc" containerName="extract-content" Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.962436 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="c185097f-c62c-41b5-87d6-20dd4b660edc" containerName="extract-content" Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.962713 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade07b13-f382-46fa-805b-3c6d479a6a13" containerName="adoption" Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.962799 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="c185097f-c62c-41b5-87d6-20dd4b660edc" containerName="registry-server" Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.964292 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-887dn/must-gather-vhv4d" Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.971440 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-887dn/must-gather-vhv4d"] Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.974755 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-887dn"/"openshift-service-ca.crt" Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.974798 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-887dn"/"kube-root-ca.crt" Nov 26 09:41:40 crc kubenswrapper[4909]: I1126 09:41:40.975650 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-887dn"/"default-dockercfg-5mxng" Nov 26 09:41:41 crc kubenswrapper[4909]: I1126 09:41:41.158234 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-must-gather-output\") pod \"must-gather-vhv4d\" (UID: \"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac\") " pod="openshift-must-gather-887dn/must-gather-vhv4d" Nov 26 09:41:41 crc kubenswrapper[4909]: I1126 09:41:41.158799 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c47mg\" (UniqueName: \"kubernetes.io/projected/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-kube-api-access-c47mg\") pod \"must-gather-vhv4d\" (UID: \"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac\") " pod="openshift-must-gather-887dn/must-gather-vhv4d" Nov 26 09:41:41 crc kubenswrapper[4909]: I1126 09:41:41.261859 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c47mg\" (UniqueName: \"kubernetes.io/projected/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-kube-api-access-c47mg\") pod \"must-gather-vhv4d\" (UID: \"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac\") " pod="openshift-must-gather-887dn/must-gather-vhv4d" Nov 26 09:41:41 crc kubenswrapper[4909]: I1126 09:41:41.262240 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-must-gather-output\") pod \"must-gather-vhv4d\" (UID: \"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac\") " pod="openshift-must-gather-887dn/must-gather-vhv4d" Nov 26 09:41:41 crc kubenswrapper[4909]: I1126 09:41:41.262658 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-must-gather-output\") pod \"must-gather-vhv4d\" (UID: \"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac\") " pod="openshift-must-gather-887dn/must-gather-vhv4d" Nov 26 09:41:41 crc kubenswrapper[4909]: I1126 09:41:41.287862 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c47mg\" (UniqueName: \"kubernetes.io/projected/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-kube-api-access-c47mg\") pod \"must-gather-vhv4d\" (UID: \"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac\") " pod="openshift-must-gather-887dn/must-gather-vhv4d" Nov 26 09:41:41 crc kubenswrapper[4909]: I1126 09:41:41.289705 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-887dn/must-gather-vhv4d" Nov 26 09:41:41 crc kubenswrapper[4909]: I1126 09:41:41.836629 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-887dn/must-gather-vhv4d"] Nov 26 09:41:41 crc kubenswrapper[4909]: I1126 09:41:41.988477 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-887dn/must-gather-vhv4d" event={"ID":"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac","Type":"ContainerStarted","Data":"1b3c6b181f0ebfae0cae2c3dd3b8c9d2269aaa73896008d4b318b151bc89cbb2"} Nov 26 09:41:46 crc kubenswrapper[4909]: I1126 09:41:46.032397 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-887dn/must-gather-vhv4d" event={"ID":"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac","Type":"ContainerStarted","Data":"5845e6de92c3e440fe908579d4f6378b0ca48a14eca3e44d1f65d02d605ad81e"} Nov 26 09:41:46 crc kubenswrapper[4909]: I1126 09:41:46.032949 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-887dn/must-gather-vhv4d" event={"ID":"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac","Type":"ContainerStarted","Data":"458b610cae6256dd0dbf2c99639e6f9e05a137a1001624c112161c1d9656cac3"} Nov 26 09:41:46 crc kubenswrapper[4909]: I1126 09:41:46.065168 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-887dn/must-gather-vhv4d" podStartSLOduration=2.547096946 podStartE2EDuration="6.065152331s" podCreationTimestamp="2025-11-26 09:41:40 +0000 UTC" firstStartedPulling="2025-11-26 09:41:41.84662723 +0000 UTC m=+9673.992838396" lastFinishedPulling="2025-11-26 09:41:45.364682615 +0000 UTC m=+9677.510893781" observedRunningTime="2025-11-26 09:41:46.062335644 +0000 UTC m=+9678.208546800" watchObservedRunningTime="2025-11-26 09:41:46.065152331 +0000 UTC m=+9678.211363497" Nov 26 09:41:50 crc kubenswrapper[4909]: I1126 09:41:50.107536 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-887dn/crc-debug-wbmsx"] Nov 26 09:41:50 crc kubenswrapper[4909]: I1126 09:41:50.109986 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-887dn/crc-debug-wbmsx" Nov 26 09:41:50 crc kubenswrapper[4909]: I1126 09:41:50.267057 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr26j\" (UniqueName: \"kubernetes.io/projected/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-kube-api-access-xr26j\") pod \"crc-debug-wbmsx\" (UID: \"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca\") " pod="openshift-must-gather-887dn/crc-debug-wbmsx" Nov 26 09:41:50 crc kubenswrapper[4909]: I1126 09:41:50.267606 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-host\") pod \"crc-debug-wbmsx\" (UID: \"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca\") " pod="openshift-must-gather-887dn/crc-debug-wbmsx" Nov 26 09:41:50 crc kubenswrapper[4909]: I1126 09:41:50.385926 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr26j\" (UniqueName: \"kubernetes.io/projected/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-kube-api-access-xr26j\") pod \"crc-debug-wbmsx\" (UID: \"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca\") " pod="openshift-must-gather-887dn/crc-debug-wbmsx" Nov 26 09:41:50 crc kubenswrapper[4909]: I1126 09:41:50.386147 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-host\") pod \"crc-debug-wbmsx\" (UID: \"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca\") " pod="openshift-must-gather-887dn/crc-debug-wbmsx" Nov 26 09:41:50 crc kubenswrapper[4909]: I1126 09:41:50.386304 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-host\") pod \"crc-debug-wbmsx\" (UID: \"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca\") " pod="openshift-must-gather-887dn/crc-debug-wbmsx" Nov 26 09:41:50 crc kubenswrapper[4909]: I1126 09:41:50.412288 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr26j\" (UniqueName: \"kubernetes.io/projected/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-kube-api-access-xr26j\") pod \"crc-debug-wbmsx\" (UID: \"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca\") " pod="openshift-must-gather-887dn/crc-debug-wbmsx" Nov 26 09:41:50 crc kubenswrapper[4909]: I1126 09:41:50.448171 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-887dn/crc-debug-wbmsx" Nov 26 09:41:50 crc kubenswrapper[4909]: W1126 09:41:50.480522 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7a9fc08_40b1_4669_b1b4_ecfa99b53fca.slice/crio-7209ceb31a0809fe177e23bdfabee48618784c65783f198f50ae6545adda5200 WatchSource:0}: Error finding container 7209ceb31a0809fe177e23bdfabee48618784c65783f198f50ae6545adda5200: Status 404 returned error can't find the container with id 7209ceb31a0809fe177e23bdfabee48618784c65783f198f50ae6545adda5200 Nov 26 09:41:51 crc kubenswrapper[4909]: I1126 09:41:51.099869 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-887dn/crc-debug-wbmsx" event={"ID":"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca","Type":"ContainerStarted","Data":"7209ceb31a0809fe177e23bdfabee48618784c65783f198f50ae6545adda5200"} Nov 26 09:42:03 crc kubenswrapper[4909]: I1126 09:42:03.230139 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-887dn/crc-debug-wbmsx" event={"ID":"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca","Type":"ContainerStarted","Data":"e3165e6c8bd1bdab5df797951f0f9a0caaf3bd50716f920cbf20bffc00f4d020"} Nov 26 09:42:03 crc kubenswrapper[4909]: I1126 09:42:03.262913 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-887dn/crc-debug-wbmsx" podStartSLOduration=1.69422348 podStartE2EDuration="13.262889939s" podCreationTimestamp="2025-11-26 09:41:50 +0000 UTC" firstStartedPulling="2025-11-26 09:41:50.48230693 +0000 UTC m=+9682.628518096" lastFinishedPulling="2025-11-26 09:42:02.050973389 +0000 UTC m=+9694.197184555" observedRunningTime="2025-11-26 09:42:03.256174855 +0000 UTC m=+9695.402386021" watchObservedRunningTime="2025-11-26 09:42:03.262889939 +0000 UTC m=+9695.409101105" Nov 26 09:42:22 crc kubenswrapper[4909]: I1126 09:42:22.431425 4909 generic.go:334] "Generic (PLEG): container finished" podID="e7a9fc08-40b1-4669-b1b4-ecfa99b53fca" containerID="e3165e6c8bd1bdab5df797951f0f9a0caaf3bd50716f920cbf20bffc00f4d020" exitCode=0 Nov 26 09:42:22 crc kubenswrapper[4909]: I1126 09:42:22.431511 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-887dn/crc-debug-wbmsx" event={"ID":"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca","Type":"ContainerDied","Data":"e3165e6c8bd1bdab5df797951f0f9a0caaf3bd50716f920cbf20bffc00f4d020"} Nov 26 09:42:23 crc kubenswrapper[4909]: I1126 09:42:23.568104 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-887dn/crc-debug-wbmsx" Nov 26 09:42:23 crc kubenswrapper[4909]: I1126 09:42:23.610572 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-887dn/crc-debug-wbmsx"] Nov 26 09:42:23 crc kubenswrapper[4909]: I1126 09:42:23.627901 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-887dn/crc-debug-wbmsx"] Nov 26 09:42:23 crc kubenswrapper[4909]: I1126 09:42:23.634176 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr26j\" (UniqueName: \"kubernetes.io/projected/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-kube-api-access-xr26j\") pod \"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca\" (UID: \"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca\") " Nov 26 09:42:23 crc kubenswrapper[4909]: I1126 09:42:23.634361 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-host\") pod \"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca\" (UID: \"e7a9fc08-40b1-4669-b1b4-ecfa99b53fca\") " Nov 26 09:42:23 crc kubenswrapper[4909]: I1126 09:42:23.634433 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-host" (OuterVolumeSpecName: "host") pod "e7a9fc08-40b1-4669-b1b4-ecfa99b53fca" (UID: "e7a9fc08-40b1-4669-b1b4-ecfa99b53fca"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 09:42:23 crc kubenswrapper[4909]: I1126 09:42:23.634834 4909 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-host\") on node \"crc\" DevicePath \"\"" Nov 26 09:42:23 crc kubenswrapper[4909]: I1126 09:42:23.640624 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-kube-api-access-xr26j" (OuterVolumeSpecName: "kube-api-access-xr26j") pod "e7a9fc08-40b1-4669-b1b4-ecfa99b53fca" (UID: "e7a9fc08-40b1-4669-b1b4-ecfa99b53fca"). InnerVolumeSpecName "kube-api-access-xr26j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:42:23 crc kubenswrapper[4909]: I1126 09:42:23.736634 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr26j\" (UniqueName: \"kubernetes.io/projected/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca-kube-api-access-xr26j\") on node \"crc\" DevicePath \"\"" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.455669 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7209ceb31a0809fe177e23bdfabee48618784c65783f198f50ae6545adda5200" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.455760 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-887dn/crc-debug-wbmsx" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.514422 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a9fc08-40b1-4669-b1b4-ecfa99b53fca" path="/var/lib/kubelet/pods/e7a9fc08-40b1-4669-b1b4-ecfa99b53fca/volumes" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.818264 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-887dn/crc-debug-cvnmn"] Nov 26 09:42:24 crc kubenswrapper[4909]: E1126 09:42:24.819819 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a9fc08-40b1-4669-b1b4-ecfa99b53fca" containerName="container-00" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.819901 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a9fc08-40b1-4669-b1b4-ecfa99b53fca" containerName="container-00" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.820148 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a9fc08-40b1-4669-b1b4-ecfa99b53fca" containerName="container-00" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.820933 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-887dn/crc-debug-cvnmn" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.859935 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b6tg\" (UniqueName: \"kubernetes.io/projected/a2392738-ad60-4ec5-b68f-619c428358f4-kube-api-access-7b6tg\") pod \"crc-debug-cvnmn\" (UID: \"a2392738-ad60-4ec5-b68f-619c428358f4\") " pod="openshift-must-gather-887dn/crc-debug-cvnmn" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.860625 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2392738-ad60-4ec5-b68f-619c428358f4-host\") pod \"crc-debug-cvnmn\" (UID: \"a2392738-ad60-4ec5-b68f-619c428358f4\") " pod="openshift-must-gather-887dn/crc-debug-cvnmn" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.962324 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b6tg\" (UniqueName: \"kubernetes.io/projected/a2392738-ad60-4ec5-b68f-619c428358f4-kube-api-access-7b6tg\") pod \"crc-debug-cvnmn\" (UID: \"a2392738-ad60-4ec5-b68f-619c428358f4\") " pod="openshift-must-gather-887dn/crc-debug-cvnmn" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.962506 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2392738-ad60-4ec5-b68f-619c428358f4-host\") pod \"crc-debug-cvnmn\" (UID: \"a2392738-ad60-4ec5-b68f-619c428358f4\") " pod="openshift-must-gather-887dn/crc-debug-cvnmn" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.962884 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2392738-ad60-4ec5-b68f-619c428358f4-host\") pod \"crc-debug-cvnmn\" (UID: \"a2392738-ad60-4ec5-b68f-619c428358f4\") " pod="openshift-must-gather-887dn/crc-debug-cvnmn" Nov 26 09:42:24 crc kubenswrapper[4909]: I1126 09:42:24.981481 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b6tg\" (UniqueName: \"kubernetes.io/projected/a2392738-ad60-4ec5-b68f-619c428358f4-kube-api-access-7b6tg\") pod \"crc-debug-cvnmn\" (UID: \"a2392738-ad60-4ec5-b68f-619c428358f4\") " 
pod="openshift-must-gather-887dn/crc-debug-cvnmn" Nov 26 09:42:25 crc kubenswrapper[4909]: I1126 09:42:25.147104 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-887dn/crc-debug-cvnmn" Nov 26 09:42:25 crc kubenswrapper[4909]: I1126 09:42:25.466728 4909 generic.go:334] "Generic (PLEG): container finished" podID="a2392738-ad60-4ec5-b68f-619c428358f4" containerID="9ae10be89265f4a3e76dd248e2962a44457e804e4aa67df7fe4a6ea5b05fef7a" exitCode=1 Nov 26 09:42:25 crc kubenswrapper[4909]: I1126 09:42:25.466806 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-887dn/crc-debug-cvnmn" event={"ID":"a2392738-ad60-4ec5-b68f-619c428358f4","Type":"ContainerDied","Data":"9ae10be89265f4a3e76dd248e2962a44457e804e4aa67df7fe4a6ea5b05fef7a"} Nov 26 09:42:25 crc kubenswrapper[4909]: I1126 09:42:25.467040 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-887dn/crc-debug-cvnmn" event={"ID":"a2392738-ad60-4ec5-b68f-619c428358f4","Type":"ContainerStarted","Data":"6a74b633e6b8ccd5bcf74597011e06551ca2a18478383f532f7aabbc461dc792"} Nov 26 09:42:25 crc kubenswrapper[4909]: I1126 09:42:25.507462 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-887dn/crc-debug-cvnmn"] Nov 26 09:42:25 crc kubenswrapper[4909]: I1126 09:42:25.517585 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-887dn/crc-debug-cvnmn"] Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.240955 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b6jdk"] Nov 26 09:42:26 crc kubenswrapper[4909]: E1126 09:42:26.241830 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2392738-ad60-4ec5-b68f-619c428358f4" containerName="container-00" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.241845 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2392738-ad60-4ec5-b68f-619c428358f4" containerName="container-00" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.242115 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2392738-ad60-4ec5-b68f-619c428358f4" containerName="container-00" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.244430 4909 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.258021 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b6jdk"] Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.289215 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8xkt\" (UniqueName: \"kubernetes.io/projected/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-kube-api-access-d8xkt\") pod \"community-operators-b6jdk\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.289321 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-catalog-content\") pod \"community-operators-b6jdk\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.289406 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-utilities\") pod \"community-operators-b6jdk\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.391659 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-catalog-content\") pod \"community-operators-b6jdk\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.391767 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-utilities\") pod \"community-operators-b6jdk\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.391838 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8xkt\" (UniqueName: \"kubernetes.io/projected/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-kube-api-access-d8xkt\") pod \"community-operators-b6jdk\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.392489 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-catalog-content\") pod \"community-operators-b6jdk\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.392738 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-utilities\") pod \"community-operators-b6jdk\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.424378 4909 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d8xkt\" (UniqueName: \"kubernetes.io/projected/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-kube-api-access-d8xkt\") pod \"community-operators-b6jdk\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.564215 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.568918 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-887dn/crc-debug-cvnmn" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.696225 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b6tg\" (UniqueName: \"kubernetes.io/projected/a2392738-ad60-4ec5-b68f-619c428358f4-kube-api-access-7b6tg\") pod \"a2392738-ad60-4ec5-b68f-619c428358f4\" (UID: \"a2392738-ad60-4ec5-b68f-619c428358f4\") " Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.696615 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2392738-ad60-4ec5-b68f-619c428358f4-host\") pod \"a2392738-ad60-4ec5-b68f-619c428358f4\" (UID: \"a2392738-ad60-4ec5-b68f-619c428358f4\") " Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.696981 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2392738-ad60-4ec5-b68f-619c428358f4-host" (OuterVolumeSpecName: "host") pod "a2392738-ad60-4ec5-b68f-619c428358f4" (UID: "a2392738-ad60-4ec5-b68f-619c428358f4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.697089 4909 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2392738-ad60-4ec5-b68f-619c428358f4-host\") on node \"crc\" DevicePath \"\"" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.701288 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2392738-ad60-4ec5-b68f-619c428358f4-kube-api-access-7b6tg" (OuterVolumeSpecName: "kube-api-access-7b6tg") pod "a2392738-ad60-4ec5-b68f-619c428358f4" (UID: "a2392738-ad60-4ec5-b68f-619c428358f4"). InnerVolumeSpecName "kube-api-access-7b6tg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:42:26 crc kubenswrapper[4909]: I1126 09:42:26.799597 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b6tg\" (UniqueName: \"kubernetes.io/projected/a2392738-ad60-4ec5-b68f-619c428358f4-kube-api-access-7b6tg\") on node \"crc\" DevicePath \"\"" Nov 26 09:42:27 crc kubenswrapper[4909]: I1126 09:42:27.127742 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b6jdk"] Nov 26 09:42:27 crc kubenswrapper[4909]: W1126 09:42:27.127852 4909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ccb23e1_dd34_46ad_8876_842cbf3c9cc8.slice/crio-2dcc21f7ff0fc0ac5a663d5bf810f6b25cd3b8d39fc50e81346c4976e4b54979 WatchSource:0}: Error finding container 2dcc21f7ff0fc0ac5a663d5bf810f6b25cd3b8d39fc50e81346c4976e4b54979: Status 404 returned error can't find the container with id 2dcc21f7ff0fc0ac5a663d5bf810f6b25cd3b8d39fc50e81346c4976e4b54979 Nov 26 09:42:27 crc kubenswrapper[4909]: I1126 09:42:27.485304 4909 scope.go:117] "RemoveContainer" containerID="9ae10be89265f4a3e76dd248e2962a44457e804e4aa67df7fe4a6ea5b05fef7a" Nov 26 09:42:27 crc kubenswrapper[4909]: I1126 09:42:27.485308 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-887dn/crc-debug-cvnmn" Nov 26 09:42:27 crc kubenswrapper[4909]: I1126 09:42:27.487404 4909 generic.go:334] "Generic (PLEG): container finished" podID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" containerID="28cef4b98f22cf45215e5d457584a4bcc8aa987aaac951b559f3e8a91cce0431" exitCode=0 Nov 26 09:42:27 crc kubenswrapper[4909]: I1126 09:42:27.487479 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6jdk" event={"ID":"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8","Type":"ContainerDied","Data":"28cef4b98f22cf45215e5d457584a4bcc8aa987aaac951b559f3e8a91cce0431"} Nov 26 09:42:27 crc kubenswrapper[4909]: I1126 09:42:27.487538 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6jdk" event={"ID":"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8","Type":"ContainerStarted","Data":"2dcc21f7ff0fc0ac5a663d5bf810f6b25cd3b8d39fc50e81346c4976e4b54979"} Nov 26 09:42:28 crc kubenswrapper[4909]: I1126 09:42:28.514891 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2392738-ad60-4ec5-b68f-619c428358f4" path="/var/lib/kubelet/pods/a2392738-ad60-4ec5-b68f-619c428358f4/volumes" Nov 26 09:42:28 crc kubenswrapper[4909]: I1126 09:42:28.516039 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6jdk" event={"ID":"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8","Type":"ContainerStarted","Data":"0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc"} Nov 26 09:42:30 crc kubenswrapper[4909]: I1126 09:42:30.525693 4909 generic.go:334] "Generic (PLEG): container finished" podID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" containerID="0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc" exitCode=0 Nov 26 09:42:30 crc kubenswrapper[4909]: I1126 09:42:30.525774 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6jdk" event={"ID":"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8","Type":"ContainerDied","Data":"0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc"} Nov 26 09:42:31 crc kubenswrapper[4909]: I1126 09:42:31.537907 4909 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6jdk" event={"ID":"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8","Type":"ContainerStarted","Data":"e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9"} Nov 26 09:42:31 crc kubenswrapper[4909]: I1126 09:42:31.554673 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b6jdk" podStartSLOduration=2.118227051 podStartE2EDuration="5.554657133s" podCreationTimestamp="2025-11-26 09:42:26 +0000 UTC" firstStartedPulling="2025-11-26 09:42:27.491625704 +0000 UTC m=+9719.637836870" lastFinishedPulling="2025-11-26 09:42:30.928055786 +0000 UTC m=+9723.074266952" observedRunningTime="2025-11-26 09:42:31.553893452 +0000 UTC m=+9723.700104628" watchObservedRunningTime="2025-11-26 09:42:31.554657133 +0000 UTC m=+9723.700868299" Nov 26 09:42:36 crc kubenswrapper[4909]: I1126 09:42:36.565242 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:36 crc kubenswrapper[4909]: I1126 09:42:36.566973 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:36 crc kubenswrapper[4909]: I1126 09:42:36.643860 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:37 crc kubenswrapper[4909]: I1126 09:42:37.301437 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:42:37 crc kubenswrapper[4909]: I1126 09:42:37.301509 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:42:37 crc kubenswrapper[4909]: I1126 09:42:37.658925 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:37 crc kubenswrapper[4909]: I1126 09:42:37.723384 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b6jdk"] Nov 26 09:42:39 crc kubenswrapper[4909]: I1126 09:42:39.624864 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b6jdk" podUID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" containerName="registry-server" containerID="cri-o://e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9" gracePeriod=2 Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.224100 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.286615 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-catalog-content\") pod \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.286693 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8xkt\" (UniqueName: \"kubernetes.io/projected/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-kube-api-access-d8xkt\") pod \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.286734 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-utilities\") pod \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\" (UID: \"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8\") " Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.287861 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-utilities" (OuterVolumeSpecName: "utilities") pod "3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" (UID: "3ccb23e1-dd34-46ad-8876-842cbf3c9cc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.291020 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.295197 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-kube-api-access-d8xkt" (OuterVolumeSpecName: "kube-api-access-d8xkt") pod "3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" (UID: "3ccb23e1-dd34-46ad-8876-842cbf3c9cc8"). InnerVolumeSpecName "kube-api-access-d8xkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.340235 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" (UID: "3ccb23e1-dd34-46ad-8876-842cbf3c9cc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.395644 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8xkt\" (UniqueName: \"kubernetes.io/projected/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-kube-api-access-d8xkt\") on node \"crc\" DevicePath \"\"" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.395698 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.644533 4909 generic.go:334] "Generic (PLEG): container finished" podID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" containerID="e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9" exitCode=0 Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.644633 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6jdk" event={"ID":"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8","Type":"ContainerDied","Data":"e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9"} Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.644687 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6jdk" event={"ID":"3ccb23e1-dd34-46ad-8876-842cbf3c9cc8","Type":"ContainerDied","Data":"2dcc21f7ff0fc0ac5a663d5bf810f6b25cd3b8d39fc50e81346c4976e4b54979"} Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.644691 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b6jdk" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.644794 4909 scope.go:117] "RemoveContainer" containerID="e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.693441 4909 scope.go:117] "RemoveContainer" containerID="0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.696813 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b6jdk"] Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.712402 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b6jdk"] Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.720378 4909 scope.go:117] "RemoveContainer" containerID="28cef4b98f22cf45215e5d457584a4bcc8aa987aaac951b559f3e8a91cce0431" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.793431 4909 scope.go:117] "RemoveContainer" containerID="e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9" Nov 26 09:42:40 crc kubenswrapper[4909]: E1126 09:42:40.794151 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9\": container with ID starting with e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9 not found: ID does not exist" containerID="e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.794203 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9"} err="failed to get container status 
\"e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9\": rpc error: code = NotFound desc = could not find container \"e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9\": container with ID starting with e88b380b7e28e15d9ab01face5eaabf6a718df2b9c624086644d80c28ff6b9c9 not found: ID does not exist" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.794247 4909 scope.go:117] "RemoveContainer" containerID="0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc" Nov 26 09:42:40 crc kubenswrapper[4909]: E1126 09:42:40.794798 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc\": container with ID starting with 0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc not found: ID does not exist" containerID="0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.794847 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc"} err="failed to get container status \"0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc\": rpc error: code = NotFound desc = could not find container \"0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc\": container with ID starting with 0c7e650073a0f5ccf7027500f699c6d1e102ac929f30dd680ab4f9af80f41ebc not found: ID does not exist" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.794882 4909 scope.go:117] "RemoveContainer" containerID="28cef4b98f22cf45215e5d457584a4bcc8aa987aaac951b559f3e8a91cce0431" Nov 26 09:42:40 crc kubenswrapper[4909]: E1126 09:42:40.795320 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28cef4b98f22cf45215e5d457584a4bcc8aa987aaac951b559f3e8a91cce0431\": container with ID starting with 28cef4b98f22cf45215e5d457584a4bcc8aa987aaac951b559f3e8a91cce0431 not found: ID does not exist" containerID="28cef4b98f22cf45215e5d457584a4bcc8aa987aaac951b559f3e8a91cce0431" Nov 26 09:42:40 crc kubenswrapper[4909]: I1126 09:42:40.795385 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28cef4b98f22cf45215e5d457584a4bcc8aa987aaac951b559f3e8a91cce0431"} err="failed to get container status \"28cef4b98f22cf45215e5d457584a4bcc8aa987aaac951b559f3e8a91cce0431\": rpc error: code = NotFound desc = could not find container \"28cef4b98f22cf45215e5d457584a4bcc8aa987aaac951b559f3e8a91cce0431\": container with ID starting with 28cef4b98f22cf45215e5d457584a4bcc8aa987aaac951b559f3e8a91cce0431 not found: ID does not exist" Nov 26 09:42:42 crc kubenswrapper[4909]: I1126 09:42:42.515244 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" path="/var/lib/kubelet/pods/3ccb23e1-dd34-46ad-8876-842cbf3c9cc8/volumes" Nov 26 09:43:07 crc kubenswrapper[4909]: I1126 09:43:07.301009 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:43:07 crc kubenswrapper[4909]: I1126 09:43:07.301678 4909 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:43:37 crc kubenswrapper[4909]: I1126 09:43:37.300854 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:43:37 crc kubenswrapper[4909]: I1126 09:43:37.301679 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:43:37 crc kubenswrapper[4909]: I1126 09:43:37.301786 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 09:43:37 crc kubenswrapper[4909]: I1126 09:43:37.303505 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800"} pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 09:43:37 crc kubenswrapper[4909]: I1126 09:43:37.303745 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" gracePeriod=600 Nov 26 09:43:37 crc kubenswrapper[4909]: E1126 09:43:37.442566 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:43:38 crc kubenswrapper[4909]: I1126 09:43:38.390412 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" exitCode=0 Nov 26 09:43:38 crc kubenswrapper[4909]: I1126 09:43:38.390534 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800"} Nov 26 09:43:38 crc kubenswrapper[4909]: I1126 09:43:38.390778 4909 scope.go:117] "RemoveContainer" containerID="e96ca0717a6fedd9cb0d2ad5895dc8f81d31bc03ea5de712a38f102288bdb2cb" Nov 26 09:43:38 crc kubenswrapper[4909]: I1126 09:43:38.391778 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:43:38 crc kubenswrapper[4909]: E1126 
09:43:38.392311 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:43:51 crc kubenswrapper[4909]: I1126 09:43:51.499100 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:43:51 crc kubenswrapper[4909]: E1126 09:43:51.500428 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:44:03 crc kubenswrapper[4909]: I1126 09:44:03.499292 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:44:03 crc kubenswrapper[4909]: E1126 09:44:03.500258 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:44:16 crc kubenswrapper[4909]: I1126 09:44:16.499565 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:44:16 crc kubenswrapper[4909]: E1126 09:44:16.500371 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:44:31 crc kubenswrapper[4909]: I1126 09:44:31.499975 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:44:31 crc kubenswrapper[4909]: E1126 09:44:31.501054 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.156389 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d4gmd"] Nov 26 09:44:41 crc kubenswrapper[4909]: E1126 09:44:41.157807 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" containerName="extract-content" Nov 26 09:44:41 crc kubenswrapper[4909]: 
I1126 09:44:41.157832 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" containerName="extract-content" Nov 26 09:44:41 crc kubenswrapper[4909]: E1126 09:44:41.157870 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" containerName="extract-utilities" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.157882 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" containerName="extract-utilities" Nov 26 09:44:41 crc kubenswrapper[4909]: E1126 09:44:41.157925 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" containerName="registry-server" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.157939 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" containerName="registry-server" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.158283 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ccb23e1-dd34-46ad-8876-842cbf3c9cc8" containerName="registry-server" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.160957 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.170762 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4gmd"] Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.300813 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gtr8\" (UniqueName: \"kubernetes.io/projected/1863a776-f6b2-4c2d-a30a-9a4f89866618-kube-api-access-5gtr8\") pod \"certified-operators-d4gmd\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.301139 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-catalog-content\") pod \"certified-operators-d4gmd\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.301261 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-utilities\") pod \"certified-operators-d4gmd\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.402962 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-catalog-content\") pod \"certified-operators-d4gmd\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.403084 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-utilities\") pod \"certified-operators-d4gmd\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 
09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.403169 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gtr8\" (UniqueName: \"kubernetes.io/projected/1863a776-f6b2-4c2d-a30a-9a4f89866618-kube-api-access-5gtr8\") pod \"certified-operators-d4gmd\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.403977 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-utilities\") pod \"certified-operators-d4gmd\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.404034 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-catalog-content\") pod \"certified-operators-d4gmd\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.422782 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gtr8\" (UniqueName: \"kubernetes.io/projected/1863a776-f6b2-4c2d-a30a-9a4f89866618-kube-api-access-5gtr8\") pod \"certified-operators-d4gmd\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:41 crc kubenswrapper[4909]: I1126 09:44:41.486580 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:42 crc kubenswrapper[4909]: I1126 09:44:42.016774 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4gmd"] Nov 26 09:44:42 crc kubenswrapper[4909]: I1126 09:44:42.215777 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4gmd" event={"ID":"1863a776-f6b2-4c2d-a30a-9a4f89866618","Type":"ContainerStarted","Data":"6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25"} Nov 26 09:44:42 crc kubenswrapper[4909]: I1126 09:44:42.216141 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4gmd" event={"ID":"1863a776-f6b2-4c2d-a30a-9a4f89866618","Type":"ContainerStarted","Data":"de40678718b5acecae288b29cd8568118635e8c7df8bf2ea1579b728233e3b26"} Nov 26 09:44:43 crc kubenswrapper[4909]: I1126 09:44:43.231846 4909 generic.go:334] "Generic (PLEG): container finished" podID="1863a776-f6b2-4c2d-a30a-9a4f89866618" containerID="6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25" exitCode=0 Nov 26 09:44:43 crc kubenswrapper[4909]: I1126 09:44:43.231927 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4gmd" event={"ID":"1863a776-f6b2-4c2d-a30a-9a4f89866618","Type":"ContainerDied","Data":"6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25"} Nov 26 09:44:44 crc kubenswrapper[4909]: I1126 09:44:44.499023 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:44:44 crc kubenswrapper[4909]: E1126 09:44:44.499601 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:44:45 crc kubenswrapper[4909]: I1126 09:44:45.254907 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4gmd" event={"ID":"1863a776-f6b2-4c2d-a30a-9a4f89866618","Type":"ContainerStarted","Data":"d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a"} Nov 26 09:44:46 crc kubenswrapper[4909]: I1126 09:44:46.276011 4909 generic.go:334] "Generic (PLEG): container finished" podID="1863a776-f6b2-4c2d-a30a-9a4f89866618" containerID="d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a" exitCode=0 Nov 26 09:44:46 crc kubenswrapper[4909]: I1126 09:44:46.276070 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4gmd" event={"ID":"1863a776-f6b2-4c2d-a30a-9a4f89866618","Type":"ContainerDied","Data":"d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a"} Nov 26 09:44:48 crc kubenswrapper[4909]: I1126 09:44:48.301787 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4gmd" event={"ID":"1863a776-f6b2-4c2d-a30a-9a4f89866618","Type":"ContainerStarted","Data":"84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3"} Nov 26 09:44:48 crc kubenswrapper[4909]: I1126 09:44:48.333197 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d4gmd" podStartSLOduration=3.636174064 podStartE2EDuration="7.333179456s" podCreationTimestamp="2025-11-26 09:44:41 +0000 UTC" firstStartedPulling="2025-11-26 09:44:43.235646663 +0000 UTC m=+9855.381857829" lastFinishedPulling="2025-11-26 09:44:46.932652055 +0000 UTC m=+9859.078863221" observedRunningTime="2025-11-26 09:44:48.328486418 +0000 UTC m=+9860.474697584" watchObservedRunningTime="2025-11-26 09:44:48.333179456 +0000 UTC m=+9860.479390612" Nov 26 09:44:51 crc kubenswrapper[4909]: I1126 09:44:51.486890 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:51 crc kubenswrapper[4909]: I1126 09:44:51.487542 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:51 crc kubenswrapper[4909]: I1126 09:44:51.541793 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:52 crc kubenswrapper[4909]: I1126 09:44:52.406712 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:52 crc kubenswrapper[4909]: I1126 09:44:52.473895 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4gmd"] Nov 26 09:44:54 crc kubenswrapper[4909]: I1126 09:44:54.364198 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d4gmd" podUID="1863a776-f6b2-4c2d-a30a-9a4f89866618" containerName="registry-server" containerID="cri-o://84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3" gracePeriod=2 Nov 26 09:44:54 crc kubenswrapper[4909]: I1126 09:44:54.878309 4909 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:54 crc kubenswrapper[4909]: I1126 09:44:54.947279 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-catalog-content\") pod \"1863a776-f6b2-4c2d-a30a-9a4f89866618\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " Nov 26 09:44:54 crc kubenswrapper[4909]: I1126 09:44:54.947632 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gtr8\" (UniqueName: \"kubernetes.io/projected/1863a776-f6b2-4c2d-a30a-9a4f89866618-kube-api-access-5gtr8\") pod \"1863a776-f6b2-4c2d-a30a-9a4f89866618\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " Nov 26 09:44:54 crc kubenswrapper[4909]: I1126 09:44:54.947857 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-utilities\") pod \"1863a776-f6b2-4c2d-a30a-9a4f89866618\" (UID: \"1863a776-f6b2-4c2d-a30a-9a4f89866618\") " Nov 26 09:44:54 crc kubenswrapper[4909]: I1126 09:44:54.949035 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-utilities" (OuterVolumeSpecName: "utilities") pod "1863a776-f6b2-4c2d-a30a-9a4f89866618" (UID: "1863a776-f6b2-4c2d-a30a-9a4f89866618"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:44:54 crc kubenswrapper[4909]: I1126 09:44:54.956524 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1863a776-f6b2-4c2d-a30a-9a4f89866618-kube-api-access-5gtr8" (OuterVolumeSpecName: "kube-api-access-5gtr8") pod "1863a776-f6b2-4c2d-a30a-9a4f89866618" (UID: "1863a776-f6b2-4c2d-a30a-9a4f89866618"). InnerVolumeSpecName "kube-api-access-5gtr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:44:54 crc kubenswrapper[4909]: I1126 09:44:54.997834 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1863a776-f6b2-4c2d-a30a-9a4f89866618" (UID: "1863a776-f6b2-4c2d-a30a-9a4f89866618"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.049534 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.049580 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1863a776-f6b2-4c2d-a30a-9a4f89866618-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.049665 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gtr8\" (UniqueName: \"kubernetes.io/projected/1863a776-f6b2-4c2d-a30a-9a4f89866618-kube-api-access-5gtr8\") on node \"crc\" DevicePath \"\"" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.375513 4909 generic.go:334] "Generic (PLEG): container finished" podID="1863a776-f6b2-4c2d-a30a-9a4f89866618" containerID="84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3" exitCode=0 Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.375557 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4gmd" event={"ID":"1863a776-f6b2-4c2d-a30a-9a4f89866618","Type":"ContainerDied","Data":"84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3"} Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.375584 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4gmd" event={"ID":"1863a776-f6b2-4c2d-a30a-9a4f89866618","Type":"ContainerDied","Data":"de40678718b5acecae288b29cd8568118635e8c7df8bf2ea1579b728233e3b26"} Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.375606 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4gmd" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.375625 4909 scope.go:117] "RemoveContainer" containerID="84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.397150 4909 scope.go:117] "RemoveContainer" containerID="d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.410073 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4gmd"] Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.420843 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d4gmd"] Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.442214 4909 scope.go:117] "RemoveContainer" containerID="6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.467550 4909 scope.go:117] "RemoveContainer" containerID="84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3" Nov 26 09:44:55 crc kubenswrapper[4909]: E1126 09:44:55.468360 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3\": container with ID starting with 84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3 not found: ID does not exist" containerID="84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.468393 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3"} err="failed to get container status \"84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3\": rpc error: code = NotFound desc = could not find container \"84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3\": container with ID starting with 84fab94190efada876c3c217fb3284955d1adb42769202b767b8168958fca6a3 not found: ID does not exist" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.468413 4909 scope.go:117] "RemoveContainer" containerID="d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a" Nov 26 09:44:55 crc kubenswrapper[4909]: E1126 09:44:55.468734 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a\": container with ID starting with d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a not found: ID does not exist" containerID="d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.468754 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a"} err="failed to get container status \"d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a\": rpc error: code = NotFound desc = could not find container \"d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a\": container with ID starting with d731b2cb6df9ee58d80ced92c5fcf3cab4319fe0efb263638661fea6d2f0673a not found: ID does not exist" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.468766 4909 scope.go:117] "RemoveContainer" 
containerID="6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25" Nov 26 09:44:55 crc kubenswrapper[4909]: E1126 09:44:55.468937 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25\": container with ID starting with 6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25 not found: ID does not exist" containerID="6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25" Nov 26 09:44:55 crc kubenswrapper[4909]: I1126 09:44:55.468961 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25"} err="failed to get container status \"6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25\": rpc error: code = NotFound desc = could not find container \"6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25\": container with ID starting with 6829aa2e144a94ade3a26c52e993a1a9d22a903f525fbf161cffa1d6f1586f25 not found: ID does not exist" Nov 26 09:44:56 crc kubenswrapper[4909]: I1126 09:44:56.499476 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:44:56 crc kubenswrapper[4909]: E1126 09:44:56.500137 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:44:56 crc kubenswrapper[4909]: I1126 09:44:56.513763 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1863a776-f6b2-4c2d-a30a-9a4f89866618" path="/var/lib/kubelet/pods/1863a776-f6b2-4c2d-a30a-9a4f89866618/volumes" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.153068 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p"] Nov 26 09:45:00 crc kubenswrapper[4909]: E1126 09:45:00.154453 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1863a776-f6b2-4c2d-a30a-9a4f89866618" containerName="extract-utilities" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.154479 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1863a776-f6b2-4c2d-a30a-9a4f89866618" containerName="extract-utilities" Nov 26 09:45:00 crc kubenswrapper[4909]: E1126 09:45:00.154536 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1863a776-f6b2-4c2d-a30a-9a4f89866618" containerName="registry-server" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.154549 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1863a776-f6b2-4c2d-a30a-9a4f89866618" containerName="registry-server" Nov 26 09:45:00 crc kubenswrapper[4909]: E1126 09:45:00.154584 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1863a776-f6b2-4c2d-a30a-9a4f89866618" containerName="extract-content" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.154625 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="1863a776-f6b2-4c2d-a30a-9a4f89866618" containerName="extract-content" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.154984 4909 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1863a776-f6b2-4c2d-a30a-9a4f89866618" containerName="registry-server" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.156525 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.159257 4909 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.161421 4909 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.165308 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p"] Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.258857 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53d278db-2eb6-435b-a053-67db8c67bb76-secret-volume\") pod \"collect-profiles-29402505-fsk7p\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.258916 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53d278db-2eb6-435b-a053-67db8c67bb76-config-volume\") pod \"collect-profiles-29402505-fsk7p\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.258953 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22w9r\" (UniqueName: \"kubernetes.io/projected/53d278db-2eb6-435b-a053-67db8c67bb76-kube-api-access-22w9r\") pod \"collect-profiles-29402505-fsk7p\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.360633 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53d278db-2eb6-435b-a053-67db8c67bb76-secret-volume\") pod \"collect-profiles-29402505-fsk7p\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.360675 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53d278db-2eb6-435b-a053-67db8c67bb76-config-volume\") pod \"collect-profiles-29402505-fsk7p\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.360743 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22w9r\" (UniqueName: \"kubernetes.io/projected/53d278db-2eb6-435b-a053-67db8c67bb76-kube-api-access-22w9r\") pod \"collect-profiles-29402505-fsk7p\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 
09:45:00.361993 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53d278db-2eb6-435b-a053-67db8c67bb76-config-volume\") pod \"collect-profiles-29402505-fsk7p\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.637368 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53d278db-2eb6-435b-a053-67db8c67bb76-secret-volume\") pod \"collect-profiles-29402505-fsk7p\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.637881 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22w9r\" (UniqueName: \"kubernetes.io/projected/53d278db-2eb6-435b-a053-67db8c67bb76-kube-api-access-22w9r\") pod \"collect-profiles-29402505-fsk7p\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:00 crc kubenswrapper[4909]: I1126 09:45:00.785766 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:01 crc kubenswrapper[4909]: I1126 09:45:01.307011 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p"] Nov 26 09:45:01 crc kubenswrapper[4909]: I1126 09:45:01.456266 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" event={"ID":"53d278db-2eb6-435b-a053-67db8c67bb76","Type":"ContainerStarted","Data":"af592afc73cbbc2dfdad46b5266a7c1bfb6cb8f18a80ea1c288cc69eeeabb34d"} Nov 26 09:45:02 crc kubenswrapper[4909]: I1126 09:45:02.466650 4909 generic.go:334] "Generic (PLEG): container finished" podID="53d278db-2eb6-435b-a053-67db8c67bb76" containerID="cb20d07b526e33082659bd2e8e37241ab76cd27fd0c9a2d763c6db0d1a0ec56a" exitCode=0 Nov 26 09:45:02 crc kubenswrapper[4909]: I1126 09:45:02.466763 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" event={"ID":"53d278db-2eb6-435b-a053-67db8c67bb76","Type":"ContainerDied","Data":"cb20d07b526e33082659bd2e8e37241ab76cd27fd0c9a2d763c6db0d1a0ec56a"} Nov 26 09:45:03 crc kubenswrapper[4909]: I1126 09:45:03.855471 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:03 crc kubenswrapper[4909]: I1126 09:45:03.950059 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22w9r\" (UniqueName: \"kubernetes.io/projected/53d278db-2eb6-435b-a053-67db8c67bb76-kube-api-access-22w9r\") pod \"53d278db-2eb6-435b-a053-67db8c67bb76\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " Nov 26 09:45:03 crc kubenswrapper[4909]: I1126 09:45:03.950247 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53d278db-2eb6-435b-a053-67db8c67bb76-config-volume\") pod \"53d278db-2eb6-435b-a053-67db8c67bb76\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " Nov 26 09:45:03 crc kubenswrapper[4909]: I1126 09:45:03.950394 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53d278db-2eb6-435b-a053-67db8c67bb76-secret-volume\") pod \"53d278db-2eb6-435b-a053-67db8c67bb76\" (UID: \"53d278db-2eb6-435b-a053-67db8c67bb76\") " Nov 26 09:45:03 crc kubenswrapper[4909]: I1126 09:45:03.951110 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53d278db-2eb6-435b-a053-67db8c67bb76-config-volume" (OuterVolumeSpecName: "config-volume") pod "53d278db-2eb6-435b-a053-67db8c67bb76" (UID: "53d278db-2eb6-435b-a053-67db8c67bb76"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 09:45:03 crc kubenswrapper[4909]: I1126 09:45:03.951800 4909 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53d278db-2eb6-435b-a053-67db8c67bb76-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 09:45:03 crc kubenswrapper[4909]: I1126 09:45:03.956009 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d278db-2eb6-435b-a053-67db8c67bb76-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "53d278db-2eb6-435b-a053-67db8c67bb76" (UID: "53d278db-2eb6-435b-a053-67db8c67bb76"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 09:45:03 crc kubenswrapper[4909]: I1126 09:45:03.956387 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d278db-2eb6-435b-a053-67db8c67bb76-kube-api-access-22w9r" (OuterVolumeSpecName: "kube-api-access-22w9r") pod "53d278db-2eb6-435b-a053-67db8c67bb76" (UID: "53d278db-2eb6-435b-a053-67db8c67bb76"). InnerVolumeSpecName "kube-api-access-22w9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:45:04 crc kubenswrapper[4909]: I1126 09:45:04.053946 4909 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53d278db-2eb6-435b-a053-67db8c67bb76-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 09:45:04 crc kubenswrapper[4909]: I1126 09:45:04.053996 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22w9r\" (UniqueName: \"kubernetes.io/projected/53d278db-2eb6-435b-a053-67db8c67bb76-kube-api-access-22w9r\") on node \"crc\" DevicePath \"\"" Nov 26 09:45:04 crc kubenswrapper[4909]: I1126 09:45:04.488427 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" event={"ID":"53d278db-2eb6-435b-a053-67db8c67bb76","Type":"ContainerDied","Data":"af592afc73cbbc2dfdad46b5266a7c1bfb6cb8f18a80ea1c288cc69eeeabb34d"} Nov 26 09:45:04 crc kubenswrapper[4909]: I1126 09:45:04.488839 4909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af592afc73cbbc2dfdad46b5266a7c1bfb6cb8f18a80ea1c288cc69eeeabb34d" Nov 26 09:45:04 crc kubenswrapper[4909]: I1126 09:45:04.488925 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402505-fsk7p" Nov 26 09:45:04 crc kubenswrapper[4909]: I1126 09:45:04.964650 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh"] Nov 26 09:45:04 crc kubenswrapper[4909]: I1126 09:45:04.979764 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402460-drfkh"] Nov 26 09:45:06 crc kubenswrapper[4909]: I1126 09:45:06.515408 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85617b4e-fe64-46d6-8ca8-7201a5012e8f" path="/var/lib/kubelet/pods/85617b4e-fe64-46d6-8ca8-7201a5012e8f/volumes" Nov 26 09:45:09 crc kubenswrapper[4909]: I1126 09:45:09.499420 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:45:09 crc kubenswrapper[4909]: E1126 09:45:09.500367 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.576305 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j5t69"] Nov 26 09:45:19 crc kubenswrapper[4909]: E1126 09:45:19.577694 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d278db-2eb6-435b-a053-67db8c67bb76" containerName="collect-profiles" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.577713 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d278db-2eb6-435b-a053-67db8c67bb76" containerName="collect-profiles" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.578022 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d278db-2eb6-435b-a053-67db8c67bb76" containerName="collect-profiles" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.582942 4909 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.627915 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5t69"] Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.705823 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-catalog-content\") pod \"redhat-operators-j5t69\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.705973 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-utilities\") pod \"redhat-operators-j5t69\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.706046 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfb2g\" (UniqueName: \"kubernetes.io/projected/41d7d90b-3239-4628-91eb-0dce2cce5663-kube-api-access-xfb2g\") pod \"redhat-operators-j5t69\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.807713 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-utilities\") pod \"redhat-operators-j5t69\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.807789 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfb2g\" (UniqueName: \"kubernetes.io/projected/41d7d90b-3239-4628-91eb-0dce2cce5663-kube-api-access-xfb2g\") pod \"redhat-operators-j5t69\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.807931 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-catalog-content\") pod \"redhat-operators-j5t69\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.808272 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-utilities\") pod \"redhat-operators-j5t69\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.808319 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-catalog-content\") pod \"redhat-operators-j5t69\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.826378 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-xfb2g\" (UniqueName: \"kubernetes.io/projected/41d7d90b-3239-4628-91eb-0dce2cce5663-kube-api-access-xfb2g\") pod \"redhat-operators-j5t69\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:19 crc kubenswrapper[4909]: I1126 09:45:19.927619 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:20 crc kubenswrapper[4909]: I1126 09:45:20.433369 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5t69"] Nov 26 09:45:20 crc kubenswrapper[4909]: I1126 09:45:20.658015 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5t69" event={"ID":"41d7d90b-3239-4628-91eb-0dce2cce5663","Type":"ContainerStarted","Data":"d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e"} Nov 26 09:45:20 crc kubenswrapper[4909]: I1126 09:45:20.658061 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5t69" event={"ID":"41d7d90b-3239-4628-91eb-0dce2cce5663","Type":"ContainerStarted","Data":"fbaf22c7780a832ff3e012a051340e269bdf54aec6045cbf266086a612e6e688"} Nov 26 09:45:21 crc kubenswrapper[4909]: I1126 09:45:21.669141 4909 generic.go:334] "Generic (PLEG): container finished" podID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerID="d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e" exitCode=0 Nov 26 09:45:21 crc kubenswrapper[4909]: I1126 09:45:21.669345 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5t69" event={"ID":"41d7d90b-3239-4628-91eb-0dce2cce5663","Type":"ContainerDied","Data":"d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e"} Nov 26 09:45:22 crc kubenswrapper[4909]: I1126 09:45:22.680781 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5t69" event={"ID":"41d7d90b-3239-4628-91eb-0dce2cce5663","Type":"ContainerStarted","Data":"df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e"} Nov 26 09:45:23 crc kubenswrapper[4909]: I1126 09:45:23.508099 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:45:23 crc kubenswrapper[4909]: E1126 09:45:23.508605 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:45:26 crc kubenswrapper[4909]: I1126 09:45:26.730833 4909 generic.go:334] "Generic (PLEG): container finished" podID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerID="df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e" exitCode=0 Nov 26 09:45:26 crc kubenswrapper[4909]: I1126 09:45:26.730897 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5t69" event={"ID":"41d7d90b-3239-4628-91eb-0dce2cce5663","Type":"ContainerDied","Data":"df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e"} Nov 26 09:45:28 crc kubenswrapper[4909]: I1126 09:45:28.760177 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-j5t69" event={"ID":"41d7d90b-3239-4628-91eb-0dce2cce5663","Type":"ContainerStarted","Data":"be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6"} Nov 26 09:45:28 crc kubenswrapper[4909]: I1126 09:45:28.786139 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j5t69" podStartSLOduration=3.894607792 podStartE2EDuration="9.786120762s" podCreationTimestamp="2025-11-26 09:45:19 +0000 UTC" firstStartedPulling="2025-11-26 09:45:21.671934256 +0000 UTC m=+9893.818145422" lastFinishedPulling="2025-11-26 09:45:27.563447226 +0000 UTC m=+9899.709658392" observedRunningTime="2025-11-26 09:45:28.777154565 +0000 UTC m=+9900.923365751" watchObservedRunningTime="2025-11-26 09:45:28.786120762 +0000 UTC m=+9900.932331928" Nov 26 09:45:29 crc kubenswrapper[4909]: I1126 09:45:29.928264 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:29 crc kubenswrapper[4909]: I1126 09:45:29.928740 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:31 crc kubenswrapper[4909]: I1126 09:45:31.005124 4909 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j5t69" podUID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerName="registry-server" probeResult="failure" output=< Nov 26 09:45:31 crc kubenswrapper[4909]: timeout: failed to connect service ":50051" within 1s Nov 26 09:45:31 crc kubenswrapper[4909]: > Nov 26 09:45:35 crc kubenswrapper[4909]: I1126 09:45:35.499732 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:45:35 crc kubenswrapper[4909]: E1126 09:45:35.501373 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:45:39 crc kubenswrapper[4909]: I1126 09:45:39.876199 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_b69a9f41-2ca0-413c-bf21-9dd70af4e486/init-config-reloader/0.log" Nov 26 09:45:39 crc kubenswrapper[4909]: I1126 09:45:39.982930 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.030965 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.177934 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_b69a9f41-2ca0-413c-bf21-9dd70af4e486/alertmanager/0.log" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.220886 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_b69a9f41-2ca0-413c-bf21-9dd70af4e486/init-config-reloader/0.log" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.225161 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j5t69"] Nov 26 09:45:40 crc 
kubenswrapper[4909]: I1126 09:45:40.243547 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_b69a9f41-2ca0-413c-bf21-9dd70af4e486/config-reloader/0.log" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.384798 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_820cfd9c-2ab4-4660-90d8-5664ba4ae34e/aodh-api/0.log" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.467305 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_820cfd9c-2ab4-4660-90d8-5664ba4ae34e/aodh-evaluator/0.log" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.490728 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_820cfd9c-2ab4-4660-90d8-5664ba4ae34e/aodh-listener/0.log" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.606545 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_820cfd9c-2ab4-4660-90d8-5664ba4ae34e/aodh-notifier/0.log" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.708025 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-ff6d88966-pkkdc_e2258ed3-c9bd-4150-a1fb-f26c31771be2/barbican-api/0.log" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.733737 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-ff6d88966-pkkdc_e2258ed3-c9bd-4150-a1fb-f26c31771be2/barbican-api-log/0.log" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.902009 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-66ffdb4466-s6kpl_b903a0f7-c1a1-43fb-abb8-bb7d83239317/barbican-keystone-listener-log/0.log" Nov 26 09:45:40 crc kubenswrapper[4909]: I1126 09:45:40.927446 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-66ffdb4466-s6kpl_b903a0f7-c1a1-43fb-abb8-bb7d83239317/barbican-keystone-listener/0.log" Nov 26 09:45:41 crc kubenswrapper[4909]: I1126 09:45:41.091331 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-668bb44595-lkzgp_aabbbfa5-7718-49d1-82ae-7b79cd170efb/barbican-worker/0.log" Nov 26 09:45:41 crc kubenswrapper[4909]: I1126 09:45:41.133694 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-668bb44595-lkzgp_aabbbfa5-7718-49d1-82ae-7b79cd170efb/barbican-worker-log/0.log" Nov 26 09:45:41 crc kubenswrapper[4909]: I1126 09:45:41.902966 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j5t69" podUID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerName="registry-server" containerID="cri-o://be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6" gracePeriod=2 Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.279077 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92b4fe15-bb71-47cc-8560-763176a1a666/ceilometer-central-agent/0.log" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.347574 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-dclj2_03058c3f-9b59-4c2c-ada7-8291a75dae01/bootstrap-openstack-openstack-cell1/0.log" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.550310 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.570547 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92b4fe15-bb71-47cc-8560-763176a1a666/ceilometer-notification-agent/0.log" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.630726 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92b4fe15-bb71-47cc-8560-763176a1a666/sg-core/0.log" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.633801 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92b4fe15-bb71-47cc-8560-763176a1a666/proxy-httpd/0.log" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.661947 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfb2g\" (UniqueName: \"kubernetes.io/projected/41d7d90b-3239-4628-91eb-0dce2cce5663-kube-api-access-xfb2g\") pod \"41d7d90b-3239-4628-91eb-0dce2cce5663\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.662081 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-utilities\") pod \"41d7d90b-3239-4628-91eb-0dce2cce5663\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.662111 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-catalog-content\") pod \"41d7d90b-3239-4628-91eb-0dce2cce5663\" (UID: \"41d7d90b-3239-4628-91eb-0dce2cce5663\") " Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.662978 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-utilities" (OuterVolumeSpecName: "utilities") pod "41d7d90b-3239-4628-91eb-0dce2cce5663" (UID: "41d7d90b-3239-4628-91eb-0dce2cce5663"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.672713 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d7d90b-3239-4628-91eb-0dce2cce5663-kube-api-access-xfb2g" (OuterVolumeSpecName: "kube-api-access-xfb2g") pod "41d7d90b-3239-4628-91eb-0dce2cce5663" (UID: "41d7d90b-3239-4628-91eb-0dce2cce5663"). InnerVolumeSpecName "kube-api-access-xfb2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.751064 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41d7d90b-3239-4628-91eb-0dce2cce5663" (UID: "41d7d90b-3239-4628-91eb-0dce2cce5663"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.764283 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfb2g\" (UniqueName: \"kubernetes.io/projected/41d7d90b-3239-4628-91eb-0dce2cce5663-kube-api-access-xfb2g\") on node \"crc\" DevicePath \"\"" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.764314 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.764325 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d7d90b-3239-4628-91eb-0dce2cce5663-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.798193 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-rnmcj_097830ef-7c28-40bd-b183-d395c23b463c/ceph-client-openstack-openstack-cell1/0.log" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.913128 4909 generic.go:334] "Generic (PLEG): container finished" podID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerID="be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6" exitCode=0 Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.913171 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5t69" event={"ID":"41d7d90b-3239-4628-91eb-0dce2cce5663","Type":"ContainerDied","Data":"be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6"} Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.913195 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5t69" event={"ID":"41d7d90b-3239-4628-91eb-0dce2cce5663","Type":"ContainerDied","Data":"fbaf22c7780a832ff3e012a051340e269bdf54aec6045cbf266086a612e6e688"} Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.913210 4909 scope.go:117] "RemoveContainer" containerID="be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.913338 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j5t69" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.940947 4909 scope.go:117] "RemoveContainer" containerID="df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.953152 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j5t69"] Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.976187 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2f91a650-01b7-47d1-9410-a47b9408c634/cinder-api/0.log" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.988924 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j5t69"] Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.989714 4909 scope.go:117] "RemoveContainer" containerID="d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e" Nov 26 09:45:42 crc kubenswrapper[4909]: I1126 09:45:42.997329 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2f91a650-01b7-47d1-9410-a47b9408c634/cinder-api-log/0.log" Nov 26 09:45:43 crc kubenswrapper[4909]: I1126 09:45:43.018931 4909 scope.go:117] "RemoveContainer" containerID="be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6" Nov 26 09:45:43 crc kubenswrapper[4909]: E1126 09:45:43.019303 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6\": container with ID starting with be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6 not found: ID does not exist" containerID="be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6" Nov 26 09:45:43 crc kubenswrapper[4909]: I1126 09:45:43.019336 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6"} err="failed to get container status \"be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6\": rpc error: code = NotFound desc = could not find container \"be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6\": container with ID starting with be9c1c8ce5134d21d4c4330047cfb92046998aeb455a543c33b6248d08f0e3a6 not found: ID does not exist" Nov 26 09:45:43 crc kubenswrapper[4909]: I1126 09:45:43.019364 4909 scope.go:117] "RemoveContainer" containerID="df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e" Nov 26 09:45:43 crc kubenswrapper[4909]: E1126 09:45:43.019679 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e\": container with ID starting with df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e not found: ID does not exist" containerID="df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e" Nov 26 09:45:43 crc kubenswrapper[4909]: I1126 09:45:43.019705 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e"} err="failed to get container status \"df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e\": rpc error: code = NotFound desc = could not find container \"df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e\": container with ID 
starting with df046b4b26a97c17cf515d2dbe62a7589d659b733d463968f23c65023b04f47e not found: ID does not exist" Nov 26 09:45:43 crc kubenswrapper[4909]: I1126 09:45:43.019719 4909 scope.go:117] "RemoveContainer" containerID="d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e" Nov 26 09:45:43 crc kubenswrapper[4909]: E1126 09:45:43.019953 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e\": container with ID starting with d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e not found: ID does not exist" containerID="d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e" Nov 26 09:45:43 crc kubenswrapper[4909]: I1126 09:45:43.019975 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e"} err="failed to get container status \"d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e\": rpc error: code = NotFound desc = could not find container \"d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e\": container with ID starting with d2db5b92c33edf28d5b011650b00ccf5e7372e73d9009b6c6f6c9a400074d04e not found: ID does not exist" Nov 26 09:45:43 crc kubenswrapper[4909]: I1126 09:45:43.268721 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d44ac0aa-a634-4189-a500-b1ead88f40e0/probe/0.log" Nov 26 09:45:43 crc kubenswrapper[4909]: I1126 09:45:43.287865 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d44ac0aa-a634-4189-a500-b1ead88f40e0/cinder-backup/0.log" Nov 26 09:45:43 crc kubenswrapper[4909]: I1126 09:45:43.411697 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2fc86a6a-a136-4600-b8b4-bf7f4baa45a8/cinder-scheduler/0.log" Nov 26 09:45:43 crc kubenswrapper[4909]: I1126 09:45:43.578182 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2fc86a6a-a136-4600-b8b4-bf7f4baa45a8/probe/0.log" Nov 26 09:45:44 crc kubenswrapper[4909]: I1126 09:45:44.251990 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-thnkk_6297fa9c-fc6c-4b1d-ab62-62e3f52004c3/configure-network-openstack-openstack-cell1/0.log" Nov 26 09:45:44 crc kubenswrapper[4909]: I1126 09:45:44.259315 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1ad3ebe0-0caa-449f-9980-0dbddd081302/probe/0.log" Nov 26 09:45:44 crc kubenswrapper[4909]: I1126 09:45:44.283204 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1ad3ebe0-0caa-449f-9980-0dbddd081302/cinder-volume/0.log" Nov 26 09:45:44 crc kubenswrapper[4909]: I1126 09:45:44.515675 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d7d90b-3239-4628-91eb-0dce2cce5663" path="/var/lib/kubelet/pods/41d7d90b-3239-4628-91eb-0dce2cce5663/volumes" Nov 26 09:45:44 crc kubenswrapper[4909]: I1126 09:45:44.758907 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-df8f9c6bc-25vnv_0fa8db2c-a313-4764-abb5-3741865c6112/init/0.log" Nov 26 09:45:44 crc kubenswrapper[4909]: I1126 09:45:44.782635 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-kvf2x_4eb1dd46-2b50-4cee-b40e-0499b60dd32c/configure-os-openstack-openstack-cell1/0.log" Nov 26 09:45:44 crc kubenswrapper[4909]: I1126 09:45:44.960507 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-df8f9c6bc-25vnv_0fa8db2c-a313-4764-abb5-3741865c6112/init/0.log" Nov 26 09:45:45 crc kubenswrapper[4909]: I1126 09:45:45.008313 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-df8f9c6bc-25vnv_0fa8db2c-a313-4764-abb5-3741865c6112/dnsmasq-dns/0.log" Nov 26 09:45:45 crc kubenswrapper[4909]: I1126 09:45:45.057427 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-dh5c6_e0b9fe64-4d4f-46a3-849f-820bdf130897/download-cache-openstack-openstack-cell1/0.log" Nov 26 09:45:45 crc kubenswrapper[4909]: I1126 09:45:45.245982 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_16671caf-93a6-40ad-8f24-b053cb477b29/glance-httpd/0.log" Nov 26 09:45:45 crc kubenswrapper[4909]: I1126 09:45:45.285364 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_16671caf-93a6-40ad-8f24-b053cb477b29/glance-log/0.log" Nov 26 09:45:45 crc kubenswrapper[4909]: I1126 09:45:45.369806 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_17136d61-f21a-46a1-a2ef-565bed7c032f/glance-log/0.log" Nov 26 09:45:45 crc kubenswrapper[4909]: I1126 09:45:45.401869 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_17136d61-f21a-46a1-a2ef-565bed7c032f/glance-httpd/0.log" Nov 26 09:45:45 crc kubenswrapper[4909]: I1126 09:45:45.722724 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5bc58458dc-fkz9r_330f2a23-1b8e-4881-a458-e9d463c4383e/heat-api/0.log" Nov 26 09:45:45 crc kubenswrapper[4909]: I1126 09:45:45.743912 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-ff4df84b7-q74lm_f22a8f9e-56d3-42bd-9d3d-fbcef3c2bef4/heat-cfnapi/0.log" Nov 26 09:45:45 crc kubenswrapper[4909]: I1126 09:45:45.820207 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7f8d6cd4db-jq8c7_d230c25a-c148-4549-9d86-60b46e6e5145/heat-engine/0.log" Nov 26 09:45:46 crc kubenswrapper[4909]: I1126 09:45:46.057510 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6bd886c577-ttt6q_d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb/horizon/0.log" Nov 26 09:45:46 crc kubenswrapper[4909]: I1126 09:45:46.083052 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6bd886c577-ttt6q_d7d842d5-82ae-4ee0-8c27-9bd1c5c3d1bb/horizon-log/0.log" Nov 26 09:45:46 crc kubenswrapper[4909]: I1126 09:45:46.138730 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-vwl5m_62dd5e07-614e-4604-a806-0464413c77f5/install-certs-openstack-openstack-cell1/0.log" Nov 26 09:45:46 crc kubenswrapper[4909]: I1126 09:45:46.351782 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-znphg_aaf6fcf3-bb6b-4c6a-9a85-91885140e70d/install-os-openstack-openstack-cell1/0.log" Nov 26 09:45:46 crc kubenswrapper[4909]: I1126 09:45:46.440079 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-c66bd4b5c-xlb68_bba4a087-0b07-4a45-b46d-989e7681e1d0/keystone-api/0.log" Nov 26 09:45:46 crc kubenswrapper[4909]: I1126 09:45:46.535609 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29402461-nkbfr_c025b17f-fdf8-4946-b88b-b33958ad8d0f/keystone-cron/0.log" Nov 26 09:45:46 crc kubenswrapper[4909]: I1126 09:45:46.658855 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_5c5a4076-8f8e-4924-bb54-e47258b70aac/kube-state-metrics/0.log" Nov 26 09:45:46 crc kubenswrapper[4909]: I1126 09:45:46.718914 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-7zc8t_c12a232c-8572-40da-bd58-1f46eab0d5b4/libvirt-openstack-openstack-cell1/0.log" Nov 26 09:45:46 crc kubenswrapper[4909]: I1126 09:45:46.860813 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_17f3de1a-d37e-43b1-9882-2d0678d3839b/manila-api-log/0.log" Nov 26 09:45:46 crc kubenswrapper[4909]: I1126 09:45:46.941789 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_17f3de1a-d37e-43b1-9882-2d0678d3839b/manila-api/0.log" Nov 26 09:45:47 crc kubenswrapper[4909]: I1126 09:45:47.014674 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_83c9363e-4ca4-4b81-8470-651bdb6f7c28/manila-scheduler/0.log" Nov 26 09:45:47 crc kubenswrapper[4909]: I1126 09:45:47.068586 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_83c9363e-4ca4-4b81-8470-651bdb6f7c28/probe/0.log" Nov 26 09:45:47 crc kubenswrapper[4909]: I1126 09:45:47.203638 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_d5e6333d-03ca-438f-882f-b3415c11e3fc/manila-share/0.log" Nov 26 09:45:47 crc kubenswrapper[4909]: I1126 09:45:47.269441 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_d5e6333d-03ca-438f-882f-b3415c11e3fc/probe/0.log" Nov 26 09:45:47 crc kubenswrapper[4909]: I1126 09:45:47.585919 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b4496fbbf-ngkvc_3a087136-4700-48b9-b87c-0bc79ca50f55/neutron-api/0.log" Nov 26 09:45:47 crc kubenswrapper[4909]: I1126 09:45:47.598493 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b4496fbbf-ngkvc_3a087136-4700-48b9-b87c-0bc79ca50f55/neutron-httpd/0.log" Nov 26 09:45:47 crc kubenswrapper[4909]: I1126 09:45:47.844112 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-flb4r_01fc94ad-49dd-4014-9145-beddf1a52403/neutron-dhcp-openstack-openstack-cell1/0.log" Nov 26 09:45:47 crc kubenswrapper[4909]: I1126 09:45:47.906526 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-8jtc5_06f085fc-7566-4e13-8b58-9d2385e57def/neutron-metadata-openstack-openstack-cell1/0.log" Nov 26 09:45:48 crc kubenswrapper[4909]: I1126 09:45:48.132880 4909 scope.go:117] "RemoveContainer" containerID="1e4ce7f84a4db07e739ee4e1c4046712e24ebf8306f10cdc516f5cce3991c54d" Nov 26 09:45:48 crc kubenswrapper[4909]: I1126 09:45:48.460573 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-2jjv5_57f07bac-a5ba-488c-91f2-e925ad366f26/neutron-sriov-openstack-openstack-cell1/0.log" Nov 26 09:45:48 crc kubenswrapper[4909]: I1126 
09:45:48.495851 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c0c9c1db-492a-44cb-9eb2-756ddcd00876/nova-api-api/0.log" Nov 26 09:45:48 crc kubenswrapper[4909]: I1126 09:45:48.585101 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c0c9c1db-492a-44cb-9eb2-756ddcd00876/nova-api-log/0.log" Nov 26 09:45:48 crc kubenswrapper[4909]: I1126 09:45:48.840243 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7a6fca93-019e-4019-a170-fc4bd6c68530/nova-cell0-conductor-conductor/0.log" Nov 26 09:45:48 crc kubenswrapper[4909]: I1126 09:45:48.978794 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4188ae86-25b4-429a-a042-906a5b04ea81/nova-cell1-conductor-conductor/0.log" Nov 26 09:45:49 crc kubenswrapper[4909]: I1126 09:45:49.230987 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_07294b7f-cf09-4c22-a428-5c25bb75ae6f/nova-cell1-novncproxy-novncproxy/0.log" Nov 26 09:45:49 crc kubenswrapper[4909]: I1126 09:45:49.364466 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellhrrff_b787ec2d-08c2-4282-9a94-fe5dc36fb14c/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Nov 26 09:45:49 crc kubenswrapper[4909]: I1126 09:45:49.499392 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:45:49 crc kubenswrapper[4909]: E1126 09:45:49.499809 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:45:49 crc kubenswrapper[4909]: I1126 09:45:49.673450 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-dmw74_e523aac5-088b-427f-890e-90ad45a407f6/nova-cell1-openstack-openstack-cell1/0.log" Nov 26 09:45:49 crc kubenswrapper[4909]: I1126 09:45:49.832062 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cbb5caa6-8215-4021-91b6-1d27967f571d/nova-metadata-metadata/0.log" Nov 26 09:45:49 crc kubenswrapper[4909]: I1126 09:45:49.913323 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cbb5caa6-8215-4021-91b6-1d27967f571d/nova-metadata-log/0.log" Nov 26 09:45:50 crc kubenswrapper[4909]: I1126 09:45:50.174064 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2105779c-8f18-4582-ad9e-e071b51f7dbc/nova-scheduler-scheduler/0.log" Nov 26 09:45:50 crc kubenswrapper[4909]: I1126 09:45:50.253445 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-578cc99bcb-vf9qr_01af70d6-bbde-4669-b93c-c06719d58742/init/0.log" Nov 26 09:45:50 crc kubenswrapper[4909]: I1126 09:45:50.838671 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-jtsrx_427f7e74-637f-4c5b-be23-132aaf076de2/init/0.log" Nov 26 09:45:50 crc kubenswrapper[4909]: I1126 09:45:50.847868 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-api-578cc99bcb-vf9qr_01af70d6-bbde-4669-b93c-c06719d58742/octavia-api-provider-agent/0.log" Nov 26 09:45:51 crc kubenswrapper[4909]: I1126 09:45:51.020860 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-578cc99bcb-vf9qr_01af70d6-bbde-4669-b93c-c06719d58742/init/0.log" Nov 26 09:45:51 crc kubenswrapper[4909]: I1126 09:45:51.091771 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-578cc99bcb-vf9qr_01af70d6-bbde-4669-b93c-c06719d58742/octavia-api/0.log" Nov 26 09:45:51 crc kubenswrapper[4909]: I1126 09:45:51.213970 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-jtsrx_427f7e74-637f-4c5b-be23-132aaf076de2/octavia-healthmanager/0.log" Nov 26 09:45:51 crc kubenswrapper[4909]: I1126 09:45:51.251632 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-jtsrx_427f7e74-637f-4c5b-be23-132aaf076de2/init/0.log" Nov 26 09:45:51 crc kubenswrapper[4909]: I1126 09:45:51.382122 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-rbx7b_2efa071d-456a-4c34-aa73-1da5e9efd3f3/init/0.log" Nov 26 09:45:51 crc kubenswrapper[4909]: I1126 09:45:51.529480 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-rbx7b_2efa071d-456a-4c34-aa73-1da5e9efd3f3/init/0.log" Nov 26 09:45:51 crc kubenswrapper[4909]: I1126 09:45:51.613233 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-rbx7b_2efa071d-456a-4c34-aa73-1da5e9efd3f3/octavia-housekeeping/0.log" Nov 26 09:45:51 crc kubenswrapper[4909]: I1126 09:45:51.671905 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-wzr7s_b481a95e-cbfb-446b-9229-3dff4536d732/init/0.log" Nov 26 09:45:52 crc kubenswrapper[4909]: I1126 09:45:52.664264 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-wzr7s_b481a95e-cbfb-446b-9229-3dff4536d732/init/0.log" Nov 26 09:45:52 crc kubenswrapper[4909]: I1126 09:45:52.678484 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-58r8d_28bc7581-0e69-42a3-b086-a83b1e730ee1/init/0.log" Nov 26 09:45:52 crc kubenswrapper[4909]: I1126 09:45:52.767386 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-wzr7s_b481a95e-cbfb-446b-9229-3dff4536d732/octavia-rsyslog/0.log" Nov 26 09:45:53 crc kubenswrapper[4909]: I1126 09:45:53.114208 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0030125a-9381-4664-9a8f-bcc4a9a812e7/mysql-bootstrap/0.log" Nov 26 09:45:53 crc kubenswrapper[4909]: I1126 09:45:53.167159 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-58r8d_28bc7581-0e69-42a3-b086-a83b1e730ee1/init/0.log" Nov 26 09:45:53 crc kubenswrapper[4909]: I1126 09:45:53.201488 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-58r8d_28bc7581-0e69-42a3-b086-a83b1e730ee1/octavia-worker/0.log" Nov 26 09:45:53 crc kubenswrapper[4909]: I1126 09:45:53.392132 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0030125a-9381-4664-9a8f-bcc4a9a812e7/mysql-bootstrap/0.log" Nov 26 09:45:53 crc kubenswrapper[4909]: I1126 09:45:53.462325 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_0030125a-9381-4664-9a8f-bcc4a9a812e7/galera/0.log" Nov 26 09:45:53 crc kubenswrapper[4909]: I1126 09:45:53.554694 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3e194b71-c30a-4d1e-bc5e-acfb949134f9/mysql-bootstrap/0.log" Nov 26 09:45:54 crc kubenswrapper[4909]: I1126 09:45:54.156980 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3e194b71-c30a-4d1e-bc5e-acfb949134f9/mysql-bootstrap/0.log" Nov 26 09:45:54 crc kubenswrapper[4909]: I1126 09:45:54.238161 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3e194b71-c30a-4d1e-bc5e-acfb949134f9/galera/0.log" Nov 26 09:45:54 crc kubenswrapper[4909]: I1126 09:45:54.277112 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_bcdfadfe-a37d-400e-8a94-e28e2685cc92/openstackclient/0.log" Nov 26 09:45:54 crc kubenswrapper[4909]: I1126 09:45:54.460865 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8jkx8_1af0814b-2284-43ed-b8bc-91736abd63ac/ovn-controller/0.log" Nov 26 09:45:54 crc kubenswrapper[4909]: I1126 09:45:54.609651 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tzvzv_c4bbdf2b-e4c8-4453-b471-c11f1421d401/openstack-network-exporter/0.log" Nov 26 09:45:54 crc kubenswrapper[4909]: I1126 09:45:54.731001 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-79cx4_88c8deef-c53d-48d5-8716-4614abbd88e0/ovsdb-server-init/0.log" Nov 26 09:45:54 crc kubenswrapper[4909]: I1126 09:45:54.958508 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-79cx4_88c8deef-c53d-48d5-8716-4614abbd88e0/ovsdb-server-init/0.log" Nov 26 09:45:55 crc kubenswrapper[4909]: I1126 09:45:55.015212 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-79cx4_88c8deef-c53d-48d5-8716-4614abbd88e0/ovsdb-server/0.log" Nov 26 09:45:55 crc kubenswrapper[4909]: I1126 09:45:55.094424 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-79cx4_88c8deef-c53d-48d5-8716-4614abbd88e0/ovs-vswitchd/0.log" Nov 26 09:45:55 crc kubenswrapper[4909]: I1126 09:45:55.253856 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1738be25-4013-47d1-b3c0-28ba45749d59/openstack-network-exporter/0.log" Nov 26 09:45:55 crc kubenswrapper[4909]: I1126 09:45:55.306247 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1738be25-4013-47d1-b3c0-28ba45749d59/ovn-northd/0.log" Nov 26 09:45:55 crc kubenswrapper[4909]: I1126 09:45:55.502139 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-7gzdn_ede0bcc4-4c9a-43fb-b6f6-c32aa1f43e4f/ovn-openstack-openstack-cell1/0.log" Nov 26 09:45:55 crc kubenswrapper[4909]: I1126 09:45:55.600611 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e6b24639-0e06-417d-af87-ebf5829602d1/openstack-network-exporter/0.log" Nov 26 09:45:55 crc kubenswrapper[4909]: I1126 09:45:55.807733 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_8873dff6-99b5-4363-89bd-26a68d88372c/openstack-network-exporter/0.log" Nov 26 09:45:55 crc kubenswrapper[4909]: I1126 09:45:55.811285 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_e6b24639-0e06-417d-af87-ebf5829602d1/ovsdbserver-nb/0.log" Nov 26 09:45:55 crc kubenswrapper[4909]: I1126 09:45:55.976657 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_8873dff6-99b5-4363-89bd-26a68d88372c/ovsdbserver-nb/0.log" Nov 26 09:45:56 crc kubenswrapper[4909]: I1126 09:45:56.011078 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_de361675-9fe3-4b71-99e2-13b199c00514/openstack-network-exporter/0.log" Nov 26 09:45:56 crc kubenswrapper[4909]: I1126 09:45:56.043917 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_de361675-9fe3-4b71-99e2-13b199c00514/ovsdbserver-nb/0.log" Nov 26 09:45:56 crc kubenswrapper[4909]: I1126 09:45:56.202865 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_52ff76a4-16e1-4823-b620-72dea8981fa1/openstack-network-exporter/0.log" Nov 26 09:45:56 crc kubenswrapper[4909]: I1126 09:45:56.278633 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_52ff76a4-16e1-4823-b620-72dea8981fa1/ovsdbserver-sb/0.log" Nov 26 09:45:56 crc kubenswrapper[4909]: I1126 09:45:56.705732 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_d415596f-0580-4a05-8eda-40af3771f654/openstack-network-exporter/0.log" Nov 26 09:45:56 crc kubenswrapper[4909]: I1126 09:45:56.721133 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_d415596f-0580-4a05-8eda-40af3771f654/ovsdbserver-sb/0.log" Nov 26 09:45:56 crc kubenswrapper[4909]: I1126 09:45:56.796911 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_502be803-759b-4d2a-93cc-10493cf5e482/openstack-network-exporter/0.log" Nov 26 09:45:56 crc kubenswrapper[4909]: I1126 09:45:56.947563 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_502be803-759b-4d2a-93cc-10493cf5e482/ovsdbserver-sb/0.log" Nov 26 09:45:57 crc kubenswrapper[4909]: I1126 09:45:57.134007 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f4957bb88-wqqml_37b7c6b6-3229-4e8f-b403-8a57c3249e1e/placement-api/0.log" Nov 26 09:45:57 crc kubenswrapper[4909]: I1126 09:45:57.228831 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f4957bb88-wqqml_37b7c6b6-3229-4e8f-b403-8a57c3249e1e/placement-log/0.log" Nov 26 09:45:57 crc kubenswrapper[4909]: I1126 09:45:57.306865 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c979gm_4a0598bc-22a7-47ca-af08-34d1f18acf20/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Nov 26 09:45:57 crc kubenswrapper[4909]: I1126 09:45:57.415429 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f27faa88-3551-4b4c-a737-409c1ef02b7f/memcached/0.log" Nov 26 09:45:57 crc kubenswrapper[4909]: I1126 09:45:57.496806 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6ebb97fd-8fc5-484a-863b-043deb114430/init-config-reloader/0.log" Nov 26 09:45:57 crc kubenswrapper[4909]: I1126 09:45:57.651962 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6ebb97fd-8fc5-484a-863b-043deb114430/init-config-reloader/0.log" Nov 26 09:45:57 crc kubenswrapper[4909]: I1126 09:45:57.681911 4909 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6ebb97fd-8fc5-484a-863b-043deb114430/config-reloader/0.log" Nov 26 09:45:57 crc kubenswrapper[4909]: I1126 09:45:57.710109 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6ebb97fd-8fc5-484a-863b-043deb114430/prometheus/0.log" Nov 26 09:45:57 crc kubenswrapper[4909]: I1126 09:45:57.742140 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6ebb97fd-8fc5-484a-863b-043deb114430/thanos-sidecar/0.log" Nov 26 09:45:57 crc kubenswrapper[4909]: I1126 09:45:57.879018 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3484134b-9037-4281-8f33-b61c0fcc4337/setup-container/0.log" Nov 26 09:45:58 crc kubenswrapper[4909]: I1126 09:45:58.061696 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3484134b-9037-4281-8f33-b61c0fcc4337/rabbitmq/0.log" Nov 26 09:45:58 crc kubenswrapper[4909]: I1126 09:45:58.078133 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3484134b-9037-4281-8f33-b61c0fcc4337/setup-container/0.log" Nov 26 09:45:58 crc kubenswrapper[4909]: I1126 09:45:58.096162 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b783ab6a-d590-4bf8-b577-aa676da17499/setup-container/0.log" Nov 26 09:45:58 crc kubenswrapper[4909]: I1126 09:45:58.338856 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b783ab6a-d590-4bf8-b577-aa676da17499/rabbitmq/0.log" Nov 26 09:45:58 crc kubenswrapper[4909]: I1126 09:45:58.343901 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-vsfdl_21301b54-6aca-4911-a8d3-1b346e9ae2c1/reboot-os-openstack-openstack-cell1/0.log" Nov 26 09:45:58 crc kubenswrapper[4909]: I1126 09:45:58.354371 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b783ab6a-d590-4bf8-b577-aa676da17499/setup-container/0.log" Nov 26 09:45:58 crc kubenswrapper[4909]: I1126 09:45:58.540952 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-9lqqb_31bd4baf-44de-4ad5-84cc-915eddf3a7da/run-os-openstack-openstack-cell1/0.log" Nov 26 09:45:58 crc kubenswrapper[4909]: I1126 09:45:58.573669 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-98vzd_1ede74bc-82e7-45ee-9592-663a43097439/ssh-known-hosts-openstack/0.log" Nov 26 09:45:58 crc kubenswrapper[4909]: I1126 09:45:58.695007 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-d5fkl_03a665a8-d345-4f57-b8fd-5d22c4d3804b/telemetry-openstack-openstack-cell1/0.log" Nov 26 09:45:58 crc kubenswrapper[4909]: I1126 09:45:58.847446 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-wm2tq_e0f2810b-5183-4439-88f2-7c47010a5aa9/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Nov 26 09:45:58 crc kubenswrapper[4909]: I1126 09:45:58.944959 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-mj6p4_01317a74-5f88-42f3-bafe-bdaa599dc2f2/validate-network-openstack-openstack-cell1/0.log" Nov 26 09:46:04 crc kubenswrapper[4909]: I1126 09:46:04.498950 4909 scope.go:117] 
"RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:46:04 crc kubenswrapper[4909]: E1126 09:46:04.501675 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:46:17 crc kubenswrapper[4909]: I1126 09:46:17.499033 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:46:17 crc kubenswrapper[4909]: E1126 09:46:17.499996 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:46:22 crc kubenswrapper[4909]: I1126 09:46:22.544243 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt_9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98/util/0.log" Nov 26 09:46:22 crc kubenswrapper[4909]: I1126 09:46:22.769320 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt_9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98/pull/0.log" Nov 26 09:46:22 crc kubenswrapper[4909]: I1126 09:46:22.805713 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt_9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98/util/0.log" Nov 26 09:46:22 crc kubenswrapper[4909]: I1126 09:46:22.807680 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt_9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98/pull/0.log" Nov 26 09:46:22 crc kubenswrapper[4909]: I1126 09:46:22.992950 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt_9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98/extract/0.log" Nov 26 09:46:22 crc kubenswrapper[4909]: I1126 09:46:22.993102 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt_9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98/pull/0.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.055089 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_88ebbc69976311d08d63f27adeeb1447d26d826cf10611b3bd38eab968cf8kt_9bfe3bfb-ebaf-4bb6-a91d-66d8278dda98/util/0.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.219079 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-2cwgh_f7f77917-da54-4e82-a356-80000a53395a/kube-rbac-proxy/0.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.220961 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-2cwgh_f7f77917-da54-4e82-a356-80000a53395a/manager/3.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.259025 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-2cwgh_f7f77917-da54-4e82-a356-80000a53395a/manager/2.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.445488 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-2x9sp_138eaa02-be79-4e16-8627-cc582d5b6770/kube-rbac-proxy/0.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.465070 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-2x9sp_138eaa02-be79-4e16-8627-cc582d5b6770/manager/3.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.478062 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-2x9sp_138eaa02-be79-4e16-8627-cc582d5b6770/manager/2.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.673935 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-scqbd_b3ca7f6d-4dba-4e22-ae42-f4184932fba2/manager/2.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.693262 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-scqbd_b3ca7f6d-4dba-4e22-ae42-f4184932fba2/manager/3.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.720539 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-scqbd_b3ca7f6d-4dba-4e22-ae42-f4184932fba2/kube-rbac-proxy/0.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.888893 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6bd966bbd4-6j4kw_cd83d237-7922-4458-9fce-8c296d0ccc0f/kube-rbac-proxy/0.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.949847 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6bd966bbd4-6j4kw_cd83d237-7922-4458-9fce-8c296d0ccc0f/manager/2.log" Nov 26 09:46:23 crc kubenswrapper[4909]: I1126 09:46:23.983072 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6bd966bbd4-6j4kw_cd83d237-7922-4458-9fce-8c296d0ccc0f/manager/3.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.085039 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-692sc_f4c87de0-5b1c-44f8-a2fb-1949a3f4af03/kube-rbac-proxy/0.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.187064 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-692sc_f4c87de0-5b1c-44f8-a2fb-1949a3f4af03/manager/3.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.224553 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-692sc_f4c87de0-5b1c-44f8-a2fb-1949a3f4af03/manager/2.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.329483 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-sphql_0ebad6d0-e522-4012-869e-903c89bd1703/kube-rbac-proxy/0.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.366564 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-sphql_0ebad6d0-e522-4012-869e-903c89bd1703/manager/3.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.430809 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-sphql_0ebad6d0-e522-4012-869e-903c89bd1703/manager/2.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.512655 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-577c5f6d94-d44wm_ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4/kube-rbac-proxy/0.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.662293 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-577c5f6d94-d44wm_ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4/manager/3.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.702850 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-577c5f6d94-d44wm_ef5bb2b0-bdf7-4b26-9df0-44d9993d02e4/manager/2.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.823834 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-8486p_8c9c6404-9f47-434c-ac1b-d08cd48d5156/kube-rbac-proxy/0.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.842649 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-8486p_8c9c6404-9f47-434c-ac1b-d08cd48d5156/manager/3.log" Nov 26 09:46:24 crc kubenswrapper[4909]: I1126 09:46:24.912667 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-8486p_8c9c6404-9f47-434c-ac1b-d08cd48d5156/manager/2.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.038080 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7d6f5d799-4gr4q_757566f7-a07b-4623-8668-b39f715ea7a9/kube-rbac-proxy/0.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.109182 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7d6f5d799-4gr4q_757566f7-a07b-4623-8668-b39f715ea7a9/manager/3.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.113734 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7d6f5d799-4gr4q_757566f7-a07b-4623-8668-b39f715ea7a9/manager/2.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.283390 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-646fd589f9-phclr_9f41a032-71ff-4608-aa2c-b16469fe55a0/kube-rbac-proxy/0.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.331475 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-646fd589f9-phclr_9f41a032-71ff-4608-aa2c-b16469fe55a0/manager/3.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.346200 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-646fd589f9-phclr_9f41a032-71ff-4608-aa2c-b16469fe55a0/manager/2.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.549306 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-872rr_cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef/manager/2.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.553688 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-872rr_cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef/kube-rbac-proxy/0.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.604172 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-872rr_cc67a63f-59b1-4448-8ead-c7fdf5a1b0ef/manager/3.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.762073 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6b6c55ffd5-dhp84_af4a09dd-04e0-465d-a817-bacf1a52babe/kube-rbac-proxy/0.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.804469 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6b6c55ffd5-dhp84_af4a09dd-04e0-465d-a817-bacf1a52babe/manager/3.log" Nov 26 09:46:25 crc kubenswrapper[4909]: I1126 09:46:25.827807 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6b6c55ffd5-dhp84_af4a09dd-04e0-465d-a817-bacf1a52babe/manager/2.log" Nov 26 09:46:26 crc kubenswrapper[4909]: I1126 09:46:26.049253 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-swdlm_4a162aeb-8377-45aa-bd44-6b8aed2f93fb/manager/2.log" Nov 26 09:46:26 crc kubenswrapper[4909]: I1126 09:46:26.063944 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-swdlm_4a162aeb-8377-45aa-bd44-6b8aed2f93fb/manager/3.log" Nov 26 09:46:26 crc kubenswrapper[4909]: I1126 09:46:26.083581 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-swdlm_4a162aeb-8377-45aa-bd44-6b8aed2f93fb/kube-rbac-proxy/0.log" Nov 26 09:46:26 crc kubenswrapper[4909]: I1126 09:46:26.330340 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7979c68bc7-c696l_61289245-0b12-4689-8a98-2b24544cacf8/kube-rbac-proxy/0.log" Nov 26 09:46:26 crc kubenswrapper[4909]: I1126 09:46:26.346346 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7979c68bc7-c696l_61289245-0b12-4689-8a98-2b24544cacf8/manager/2.log" Nov 26 09:46:26 crc kubenswrapper[4909]: I1126 09:46:26.643973 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7979c68bc7-c696l_61289245-0b12-4689-8a98-2b24544cacf8/manager/3.log" Nov 26 09:46:26 crc kubenswrapper[4909]: I1126 09:46:26.760184 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-kdp8v_b68371f8-f38e-44e5-bd68-d059f1e3e89a/kube-rbac-proxy/0.log" Nov 26 09:46:26 crc kubenswrapper[4909]: I1126 09:46:26.788451 4909 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-kdp8v_b68371f8-f38e-44e5-bd68-d059f1e3e89a/manager/1.log" Nov 26 09:46:26 crc kubenswrapper[4909]: I1126 09:46:26.793819 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-kdp8v_b68371f8-f38e-44e5-bd68-d059f1e3e89a/manager/0.log" Nov 26 09:46:26 crc kubenswrapper[4909]: I1126 09:46:26.928315 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68c78b6ff8-dmnlq_fea4eb2c-ad33-4504-a4e4-8c82875b2d0c/kube-rbac-proxy/0.log" Nov 26 09:46:26 crc kubenswrapper[4909]: I1126 09:46:26.975207 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68c78b6ff8-dmnlq_fea4eb2c-ad33-4504-a4e4-8c82875b2d0c/manager/2.log" Nov 26 09:46:27 crc kubenswrapper[4909]: I1126 09:46:27.149878 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6c945fd485-mgkgv_dd0d0446-c640-42e7-9ff6-e71e59e4a459/kube-rbac-proxy/0.log" Nov 26 09:46:27 crc kubenswrapper[4909]: I1126 09:46:27.280021 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6c945fd485-mgkgv_dd0d0446-c640-42e7-9ff6-e71e59e4a459/operator/1.log" Nov 26 09:46:27 crc kubenswrapper[4909]: I1126 09:46:27.315987 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6c945fd485-mgkgv_dd0d0446-c640-42e7-9ff6-e71e59e4a459/operator/0.log" Nov 26 09:46:27 crc kubenswrapper[4909]: I1126 09:46:27.414220 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nmqqp_bc7cd522-0eab-4a8a-9146-abdb0d13ed54/registry-server/0.log" Nov 26 09:46:27 crc kubenswrapper[4909]: I1126 09:46:27.633914 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-xcrzl_cad0b373-54da-4331-aa01-27d08edaa1ef/manager/3.log" Nov 26 09:46:27 crc kubenswrapper[4909]: I1126 09:46:27.637189 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-xcrzl_cad0b373-54da-4331-aa01-27d08edaa1ef/kube-rbac-proxy/0.log" Nov 26 09:46:27 crc kubenswrapper[4909]: I1126 09:46:27.689402 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-xcrzl_cad0b373-54da-4331-aa01-27d08edaa1ef/manager/2.log" Nov 26 09:46:27 crc kubenswrapper[4909]: I1126 09:46:27.893504 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-5t9sx_10e6987e-11d4-4c64-bc26-bb45590f3fff/manager/3.log" Nov 26 09:46:27 crc kubenswrapper[4909]: I1126 09:46:27.920373 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-5t9sx_10e6987e-11d4-4c64-bc26-bb45590f3fff/kube-rbac-proxy/0.log" Nov 26 09:46:27 crc kubenswrapper[4909]: I1126 09:46:27.940755 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-5t9sx_10e6987e-11d4-4c64-bc26-bb45590f3fff/manager/2.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.123693 
4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-w69tb_20a1b8f0-7e93-4d4a-b527-7470d128a2bc/operator/2.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.201292 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-cc9f5bc5c-kbwpk_5b985112-f6b3-4879-b02e-8ac0e510730b/kube-rbac-proxy/0.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.201698 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-w69tb_20a1b8f0-7e93-4d4a-b527-7470d128a2bc/operator/3.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.335245 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68c78b6ff8-dmnlq_fea4eb2c-ad33-4504-a4e4-8c82875b2d0c/manager/3.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.361889 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-cc9f5bc5c-kbwpk_5b985112-f6b3-4879-b02e-8ac0e510730b/manager/2.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.372257 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-cc9f5bc5c-kbwpk_5b985112-f6b3-4879-b02e-8ac0e510730b/manager/1.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.476347 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58487d9bf4-7rjcs_f8afd5eb-02e8-4a94-be0d-19a709270945/kube-rbac-proxy/0.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.549775 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58487d9bf4-7rjcs_f8afd5eb-02e8-4a94-be0d-19a709270945/manager/2.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.633194 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-77db6bf9c-bz9j9_365248fc-0b34-46df-bbdc-043f89694812/kube-rbac-proxy/0.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.649709 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58487d9bf4-7rjcs_f8afd5eb-02e8-4a94-be0d-19a709270945/manager/3.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.689611 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-77db6bf9c-bz9j9_365248fc-0b34-46df-bbdc-043f89694812/manager/1.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.758319 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-77db6bf9c-bz9j9_365248fc-0b34-46df-bbdc-043f89694812/manager/0.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.813688 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b56b8849f-fd6dq_0f99fe6f-9209-4c74-9bcb-619212d7812e/kube-rbac-proxy/0.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.864416 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b56b8849f-fd6dq_0f99fe6f-9209-4c74-9bcb-619212d7812e/manager/2.log" Nov 26 09:46:28 crc kubenswrapper[4909]: I1126 09:46:28.976673 4909 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b56b8849f-fd6dq_0f99fe6f-9209-4c74-9bcb-619212d7812e/manager/1.log" Nov 26 09:46:32 crc kubenswrapper[4909]: I1126 09:46:32.499240 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:46:32 crc kubenswrapper[4909]: E1126 09:46:32.500162 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:46:45 crc kubenswrapper[4909]: I1126 09:46:45.499238 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:46:45 crc kubenswrapper[4909]: E1126 09:46:45.499971 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:46:46 crc kubenswrapper[4909]: I1126 09:46:46.078079 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9g6s4_22ae4443-3879-489b-a556-474a11712c47/control-plane-machine-set-operator/0.log" Nov 26 09:46:46 crc kubenswrapper[4909]: I1126 09:46:46.236303 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-p4d2g_56714b37-2c6a-42d6-8f7f-c8302a61bd6f/kube-rbac-proxy/0.log" Nov 26 09:46:46 crc kubenswrapper[4909]: I1126 09:46:46.281390 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-p4d2g_56714b37-2c6a-42d6-8f7f-c8302a61bd6f/machine-api-operator/0.log" Nov 26 09:47:00 crc kubenswrapper[4909]: I1126 09:47:00.253695 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-4p4p2_ce540878-55f9-495e-8cc1-30402bb55d9f/cert-manager-controller/1.log" Nov 26 09:47:00 crc kubenswrapper[4909]: I1126 09:47:00.319192 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-4p4p2_ce540878-55f9-495e-8cc1-30402bb55d9f/cert-manager-controller/0.log" Nov 26 09:47:00 crc kubenswrapper[4909]: I1126 09:47:00.498811 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:47:00 crc kubenswrapper[4909]: E1126 09:47:00.499076 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:47:00 crc kubenswrapper[4909]: I1126 09:47:00.499453 4909 log.go:25] "Finished parsing 
log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-86dsq_ff1a0925-55ac-478f-a400-44391e090a1d/cert-manager-cainjector/1.log" Nov 26 09:47:00 crc kubenswrapper[4909]: I1126 09:47:00.522425 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-86dsq_ff1a0925-55ac-478f-a400-44391e090a1d/cert-manager-cainjector/2.log" Nov 26 09:47:00 crc kubenswrapper[4909]: I1126 09:47:00.648473 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-bb6kf_0fb520c3-031d-4c32-af9e-b4cdb73e4851/cert-manager-webhook/0.log" Nov 26 09:47:13 crc kubenswrapper[4909]: I1126 09:47:13.499688 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:47:13 crc kubenswrapper[4909]: E1126 09:47:13.500340 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:47:14 crc kubenswrapper[4909]: I1126 09:47:14.267474 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-ntngv_7f022f0f-6f02-4652-8f76-44d162f8db2d/nmstate-console-plugin/0.log" Nov 26 09:47:14 crc kubenswrapper[4909]: I1126 09:47:14.443480 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tbjjq_84a91ab3-ee60-44e7-ba77-837689cfd490/nmstate-handler/0.log" Nov 26 09:47:14 crc kubenswrapper[4909]: I1126 09:47:14.538782 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-q29ws_c5d1ea9d-2001-418f-9b98-41cf8256a723/nmstate-metrics/0.log" Nov 26 09:47:14 crc kubenswrapper[4909]: I1126 09:47:14.564518 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-q29ws_c5d1ea9d-2001-418f-9b98-41cf8256a723/kube-rbac-proxy/0.log" Nov 26 09:47:14 crc kubenswrapper[4909]: I1126 09:47:14.703067 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-dsr5w_14291eb4-4810-4cb7-ba01-f62943f69090/nmstate-operator/0.log" Nov 26 09:47:14 crc kubenswrapper[4909]: I1126 09:47:14.764498 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-nd78x_2e1de2fd-7015-4de2-9689-d99deacc07b1/nmstate-webhook/0.log" Nov 26 09:47:26 crc kubenswrapper[4909]: I1126 09:47:26.499511 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:47:26 crc kubenswrapper[4909]: E1126 09:47:26.500362 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:47:30 crc kubenswrapper[4909]: I1126 09:47:30.881139 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6c7b4b5f48-9glvr_6f633473-e125-4441-a526-ea45f81f39a3/kube-rbac-proxy/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.074235 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-xw4zq_b1858595-566b-40f9-bf2b-bb6e1bd5990a/frr-k8s-webhook-server/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.113840 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-frr-files/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.332851 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-9glvr_6f633473-e125-4441-a526-ea45f81f39a3/controller/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.424507 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-frr-files/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.452626 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-reloader/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.480786 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-metrics/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.573861 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-reloader/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.722433 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-metrics/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.736994 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-reloader/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.737433 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-frr-files/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.776214 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-metrics/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.942337 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-reloader/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.943527 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-frr-files/0.log" Nov 26 09:47:31 crc kubenswrapper[4909]: I1126 09:47:31.964471 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/cp-metrics/0.log" Nov 26 09:47:32 crc kubenswrapper[4909]: I1126 09:47:32.002325 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/controller/0.log" Nov 26 09:47:32 crc kubenswrapper[4909]: I1126 09:47:32.141273 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/frr-metrics/0.log" Nov 26 09:47:32 crc kubenswrapper[4909]: I1126 09:47:32.179302 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/kube-rbac-proxy/0.log" Nov 26 09:47:32 crc kubenswrapper[4909]: I1126 09:47:32.220788 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/kube-rbac-proxy-frr/0.log" Nov 26 09:47:32 crc kubenswrapper[4909]: I1126 09:47:32.344209 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/reloader/0.log" Nov 26 09:47:32 crc kubenswrapper[4909]: I1126 09:47:32.434240 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58dcdd989d-ctkx2_8ace07e4-e65b-451c-8623-f71b4f7d4f14/manager/3.log" Nov 26 09:47:32 crc kubenswrapper[4909]: I1126 09:47:32.626681 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58dcdd989d-ctkx2_8ace07e4-e65b-451c-8623-f71b4f7d4f14/manager/2.log" Nov 26 09:47:32 crc kubenswrapper[4909]: I1126 09:47:32.651087 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-76556765bb-nprm5_96f75acf-1983-407f-a5dc-cfcb53dc9dc7/webhook-server/0.log" Nov 26 09:47:32 crc kubenswrapper[4909]: I1126 09:47:32.840084 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpv4w_1c98a2e9-110c-44a2-8d31-39e894c7c759/kube-rbac-proxy/0.log" Nov 26 09:47:34 crc kubenswrapper[4909]: I1126 09:47:34.007635 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpv4w_1c98a2e9-110c-44a2-8d31-39e894c7c759/speaker/0.log" Nov 26 09:47:35 crc kubenswrapper[4909]: I1126 09:47:35.681258 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wf87k_33795123-6b00-438c-8dc7-b298f7c66924/frr/0.log" Nov 26 09:47:40 crc kubenswrapper[4909]: I1126 09:47:40.499462 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:47:40 crc kubenswrapper[4909]: E1126 09:47:40.500222 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:47:48 crc kubenswrapper[4909]: I1126 09:47:48.569152 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk_aad78080-9712-4159-9318-7b3eefb0cb7b/util/0.log" Nov 26 09:47:48 crc kubenswrapper[4909]: I1126 09:47:48.767983 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk_aad78080-9712-4159-9318-7b3eefb0cb7b/pull/0.log" Nov 26 09:47:48 crc kubenswrapper[4909]: I1126 09:47:48.791013 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk_aad78080-9712-4159-9318-7b3eefb0cb7b/util/0.log" Nov 26 09:47:48 crc kubenswrapper[4909]: I1126 09:47:48.841921 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk_aad78080-9712-4159-9318-7b3eefb0cb7b/pull/0.log" Nov 26 09:47:49 crc kubenswrapper[4909]: I1126 09:47:49.043095 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk_aad78080-9712-4159-9318-7b3eefb0cb7b/util/0.log" Nov 26 09:47:49 crc kubenswrapper[4909]: I1126 09:47:49.101632 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk_aad78080-9712-4159-9318-7b3eefb0cb7b/extract/0.log" Nov 26 09:47:49 crc kubenswrapper[4909]: I1126 09:47:49.166804 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bnnk_aad78080-9712-4159-9318-7b3eefb0cb7b/pull/0.log" Nov 26 09:47:49 crc kubenswrapper[4909]: I1126 09:47:49.269234 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd_5519a2e2-0cf0-441e-b9ed-32b3daf16fc9/util/0.log" Nov 26 09:47:49 crc kubenswrapper[4909]: I1126 09:47:49.447737 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd_5519a2e2-0cf0-441e-b9ed-32b3daf16fc9/util/0.log" Nov 26 09:47:49 crc kubenswrapper[4909]: I1126 09:47:49.448003 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd_5519a2e2-0cf0-441e-b9ed-32b3daf16fc9/pull/0.log" Nov 26 09:47:49 crc kubenswrapper[4909]: I1126 09:47:49.471121 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd_5519a2e2-0cf0-441e-b9ed-32b3daf16fc9/pull/0.log" Nov 26 09:47:49 crc kubenswrapper[4909]: I1126 09:47:49.643357 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd_5519a2e2-0cf0-441e-b9ed-32b3daf16fc9/util/0.log" Nov 26 09:47:49 crc kubenswrapper[4909]: I1126 09:47:49.644360 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd_5519a2e2-0cf0-441e-b9ed-32b3daf16fc9/pull/0.log" Nov 26 09:47:49 crc kubenswrapper[4909]: I1126 09:47:49.713864 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edw8nd_5519a2e2-0cf0-441e-b9ed-32b3daf16fc9/extract/0.log" Nov 26 09:47:49 crc kubenswrapper[4909]: I1126 09:47:49.820903 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h_dc198e11-71c6-418c-828e-908f9ff0243d/util/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.070625 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h_dc198e11-71c6-418c-828e-908f9ff0243d/util/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.097632 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h_dc198e11-71c6-418c-828e-908f9ff0243d/pull/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.104527 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h_dc198e11-71c6-418c-828e-908f9ff0243d/pull/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.263938 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h_dc198e11-71c6-418c-828e-908f9ff0243d/util/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.292902 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h_dc198e11-71c6-418c-828e-908f9ff0243d/extract/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.297672 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105dt6h_dc198e11-71c6-418c-828e-908f9ff0243d/pull/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.468653 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfn6t_9d1a9073-ad63-442c-b428-49b47ab69a83/extract-utilities/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.668429 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfn6t_9d1a9073-ad63-442c-b428-49b47ab69a83/extract-utilities/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.677415 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfn6t_9d1a9073-ad63-442c-b428-49b47ab69a83/extract-content/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.690712 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfn6t_9d1a9073-ad63-442c-b428-49b47ab69a83/extract-content/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.859949 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfn6t_9d1a9073-ad63-442c-b428-49b47ab69a83/extract-utilities/0.log" Nov 26 09:47:51 crc kubenswrapper[4909]: I1126 09:47:51.862385 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfn6t_9d1a9073-ad63-442c-b428-49b47ab69a83/extract-content/0.log" Nov 26 09:47:52 crc kubenswrapper[4909]: I1126 09:47:52.024398 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtq92_7ffc057a-aedf-4a50-a7a4-ae7360212301/extract-utilities/0.log" Nov 26 09:47:52 crc kubenswrapper[4909]: I1126 09:47:52.700736 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtq92_7ffc057a-aedf-4a50-a7a4-ae7360212301/extract-utilities/0.log" Nov 26 09:47:52 crc kubenswrapper[4909]: I1126 09:47:52.760814 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-gtq92_7ffc057a-aedf-4a50-a7a4-ae7360212301/extract-content/0.log" Nov 26 09:47:52 crc kubenswrapper[4909]: I1126 09:47:52.774225 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtq92_7ffc057a-aedf-4a50-a7a4-ae7360212301/extract-content/0.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.008477 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtq92_7ffc057a-aedf-4a50-a7a4-ae7360212301/extract-content/0.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.048357 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtq92_7ffc057a-aedf-4a50-a7a4-ae7360212301/extract-utilities/0.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.245198 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q_1e954cbc-c96d-4655-9098-340b6a9452d6/util/0.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.275088 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfn6t_9d1a9073-ad63-442c-b428-49b47ab69a83/registry-server/0.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.420831 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q_1e954cbc-c96d-4655-9098-340b6a9452d6/pull/0.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.451929 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q_1e954cbc-c96d-4655-9098-340b6a9452d6/pull/0.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.457380 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q_1e954cbc-c96d-4655-9098-340b6a9452d6/util/0.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.739440 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q_1e954cbc-c96d-4655-9098-340b6a9452d6/util/0.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.762044 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q_1e954cbc-c96d-4655-9098-340b6a9452d6/extract/0.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.765013 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gfr7q_1e954cbc-c96d-4655-9098-340b6a9452d6/pull/0.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.969797 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-s7vvj_59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4/marketplace-operator/1.log" Nov 26 09:47:53 crc kubenswrapper[4909]: I1126 09:47:53.995414 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-s7vvj_59fc50dc-e77e-4c40-b29a-c9d8f48ac4d4/marketplace-operator/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.063987 4909 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnb2v_662bf7ae-d0e1-462d-9e20-b74af9087f01/extract-utilities/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.305489 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnb2v_662bf7ae-d0e1-462d-9e20-b74af9087f01/extract-utilities/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.305762 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnb2v_662bf7ae-d0e1-462d-9e20-b74af9087f01/extract-content/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.338609 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnb2v_662bf7ae-d0e1-462d-9e20-b74af9087f01/extract-content/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.397696 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtq92_7ffc057a-aedf-4a50-a7a4-ae7360212301/registry-server/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.534122 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnb2v_662bf7ae-d0e1-462d-9e20-b74af9087f01/extract-content/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.583565 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnb2v_662bf7ae-d0e1-462d-9e20-b74af9087f01/extract-utilities/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.638951 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7sxgl_105cf0ca-2270-45cc-b9ba-0e1cad52d688/extract-utilities/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.787203 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7sxgl_105cf0ca-2270-45cc-b9ba-0e1cad52d688/extract-utilities/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.829355 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnb2v_662bf7ae-d0e1-462d-9e20-b74af9087f01/registry-server/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.835124 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7sxgl_105cf0ca-2270-45cc-b9ba-0e1cad52d688/extract-content/0.log" Nov 26 09:47:54 crc kubenswrapper[4909]: I1126 09:47:54.858369 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7sxgl_105cf0ca-2270-45cc-b9ba-0e1cad52d688/extract-content/0.log" Nov 26 09:47:55 crc kubenswrapper[4909]: I1126 09:47:55.073814 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7sxgl_105cf0ca-2270-45cc-b9ba-0e1cad52d688/extract-utilities/0.log" Nov 26 09:47:55 crc kubenswrapper[4909]: I1126 09:47:55.075253 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7sxgl_105cf0ca-2270-45cc-b9ba-0e1cad52d688/extract-content/0.log" Nov 26 09:47:55 crc kubenswrapper[4909]: I1126 09:47:55.498467 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:47:55 crc kubenswrapper[4909]: E1126 09:47:55.498808 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:47:56 crc kubenswrapper[4909]: I1126 09:47:56.177769 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7sxgl_105cf0ca-2270-45cc-b9ba-0e1cad52d688/registry-server/0.log" Nov 26 09:48:07 crc kubenswrapper[4909]: I1126 09:48:07.499818 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:48:07 crc kubenswrapper[4909]: E1126 09:48:07.500739 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:48:08 crc kubenswrapper[4909]: I1126 09:48:08.106895 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-mn7t6_593f8066-ac54-40b8-a70d-4146a75a4615/prometheus-operator/0.log" Nov 26 09:48:08 crc kubenswrapper[4909]: I1126 09:48:08.243204 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d6bc88898-wp49f_b19d6b94-1bc6-4a00-9d9a-fe4cab3b15e8/prometheus-operator-admission-webhook/0.log" Nov 26 09:48:08 crc kubenswrapper[4909]: I1126 09:48:08.289005 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d6bc88898-ztv5k_8e5bba19-5d8e-4164-bb73-71cd72bc3f47/prometheus-operator-admission-webhook/0.log" Nov 26 09:48:08 crc kubenswrapper[4909]: I1126 09:48:08.458783 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-6zzck_4e5c3951-a95e-459a-99f6-3e405bb4d8f8/operator/0.log" Nov 26 09:48:08 crc kubenswrapper[4909]: I1126 09:48:08.575041 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-4sxd6_a697aca5-82ec-4422-9e17-7dfadbee7ab2/perses-operator/0.log" Nov 26 09:48:18 crc kubenswrapper[4909]: I1126 09:48:18.506993 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:48:18 crc kubenswrapper[4909]: E1126 09:48:18.507658 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:48:33 crc kubenswrapper[4909]: I1126 09:48:33.500142 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:48:33 crc kubenswrapper[4909]: E1126 09:48:33.500981 4909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4lffv_openshift-machine-config-operator(602939ce-1411-4a17-a42f-719afb7c6dd9)\"" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" Nov 26 09:48:34 crc kubenswrapper[4909]: E1126 09:48:34.093305 4909 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.206:44886->38.129.56.206:33469: write tcp 38.129.56.206:44886->38.129.56.206:33469: write: broken pipe Nov 26 09:48:44 crc kubenswrapper[4909]: I1126 09:48:44.499338 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:48:44 crc kubenswrapper[4909]: I1126 09:48:44.975999 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"ef342f58a0c945ed5951507c251ec85634dba79cb98f3708ddc79cb961a6ae1c"} Nov 26 09:48:48 crc kubenswrapper[4909]: I1126 09:48:48.305528 4909 scope.go:117] "RemoveContainer" containerID="e3165e6c8bd1bdab5df797951f0f9a0caaf3bd50716f920cbf20bffc00f4d020" Nov 26 09:50:19 crc kubenswrapper[4909]: I1126 09:50:19.693788 4909 generic.go:334] "Generic (PLEG): container finished" podID="6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" containerID="458b610cae6256dd0dbf2c99639e6f9e05a137a1001624c112161c1d9656cac3" exitCode=0 Nov 26 09:50:19 crc kubenswrapper[4909]: I1126 09:50:19.693910 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-887dn/must-gather-vhv4d" event={"ID":"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac","Type":"ContainerDied","Data":"458b610cae6256dd0dbf2c99639e6f9e05a137a1001624c112161c1d9656cac3"} Nov 26 09:50:19 crc kubenswrapper[4909]: I1126 09:50:19.695298 4909 scope.go:117] "RemoveContainer" containerID="458b610cae6256dd0dbf2c99639e6f9e05a137a1001624c112161c1d9656cac3" Nov 26 09:50:19 crc kubenswrapper[4909]: I1126 09:50:19.788848 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-887dn_must-gather-vhv4d_6648e3ee-fe30-46f7-ba4b-f9957e6e18ac/gather/0.log" Nov 26 09:50:28 crc kubenswrapper[4909]: I1126 09:50:28.409285 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-887dn/must-gather-vhv4d"] Nov 26 09:50:28 crc kubenswrapper[4909]: I1126 09:50:28.409943 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-887dn/must-gather-vhv4d" podUID="6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" containerName="copy" containerID="cri-o://5845e6de92c3e440fe908579d4f6378b0ca48a14eca3e44d1f65d02d605ad81e" gracePeriod=2 Nov 26 09:50:28 crc kubenswrapper[4909]: I1126 09:50:28.422937 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-887dn/must-gather-vhv4d"] Nov 26 09:50:28 crc kubenswrapper[4909]: I1126 09:50:28.807773 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-887dn_must-gather-vhv4d_6648e3ee-fe30-46f7-ba4b-f9957e6e18ac/copy/0.log" Nov 26 09:50:28 crc kubenswrapper[4909]: I1126 09:50:28.809629 4909 generic.go:334] "Generic (PLEG): container finished" podID="6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" containerID="5845e6de92c3e440fe908579d4f6378b0ca48a14eca3e44d1f65d02d605ad81e" exitCode=143 Nov 26 09:50:28 crc kubenswrapper[4909]: I1126 09:50:28.809675 4909 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1b3c6b181f0ebfae0cae2c3dd3b8c9d2269aaa73896008d4b318b151bc89cbb2" Nov 26 09:50:28 crc kubenswrapper[4909]: I1126 09:50:28.869321 4909 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-887dn_must-gather-vhv4d_6648e3ee-fe30-46f7-ba4b-f9957e6e18ac/copy/0.log" Nov 26 09:50:28 crc kubenswrapper[4909]: I1126 09:50:28.873697 4909 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-887dn/must-gather-vhv4d" Nov 26 09:50:29 crc kubenswrapper[4909]: I1126 09:50:29.034242 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-must-gather-output\") pod \"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac\" (UID: \"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac\") " Nov 26 09:50:29 crc kubenswrapper[4909]: I1126 09:50:29.034472 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c47mg\" (UniqueName: \"kubernetes.io/projected/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-kube-api-access-c47mg\") pod \"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac\" (UID: \"6648e3ee-fe30-46f7-ba4b-f9957e6e18ac\") " Nov 26 09:50:29 crc kubenswrapper[4909]: I1126 09:50:29.043469 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-kube-api-access-c47mg" (OuterVolumeSpecName: "kube-api-access-c47mg") pod "6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" (UID: "6648e3ee-fe30-46f7-ba4b-f9957e6e18ac"). InnerVolumeSpecName "kube-api-access-c47mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:50:29 crc kubenswrapper[4909]: I1126 09:50:29.140732 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c47mg\" (UniqueName: \"kubernetes.io/projected/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-kube-api-access-c47mg\") on node \"crc\" DevicePath \"\"" Nov 26 09:50:29 crc kubenswrapper[4909]: I1126 09:50:29.260978 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" (UID: "6648e3ee-fe30-46f7-ba4b-f9957e6e18ac"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:50:29 crc kubenswrapper[4909]: I1126 09:50:29.345005 4909 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 26 09:50:29 crc kubenswrapper[4909]: I1126 09:50:29.817772 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-887dn/must-gather-vhv4d" Nov 26 09:50:30 crc kubenswrapper[4909]: I1126 09:50:30.510361 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" path="/var/lib/kubelet/pods/6648e3ee-fe30-46f7-ba4b-f9957e6e18ac/volumes" Nov 26 09:50:48 crc kubenswrapper[4909]: I1126 09:50:48.426929 4909 scope.go:117] "RemoveContainer" containerID="5845e6de92c3e440fe908579d4f6378b0ca48a14eca3e44d1f65d02d605ad81e" Nov 26 09:50:48 crc kubenswrapper[4909]: I1126 09:50:48.470187 4909 scope.go:117] "RemoveContainer" containerID="458b610cae6256dd0dbf2c99639e6f9e05a137a1001624c112161c1d9656cac3" Nov 26 09:51:07 crc kubenswrapper[4909]: I1126 09:51:07.300661 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:51:07 crc kubenswrapper[4909]: I1126 09:51:07.301244 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:51:22 crc kubenswrapper[4909]: I1126 09:51:22.895668 4909 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g4krq"] Nov 26 09:51:22 crc kubenswrapper[4909]: E1126 09:51:22.897997 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerName="extract-content" Nov 26 09:51:22 crc kubenswrapper[4909]: I1126 09:51:22.898019 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerName="extract-content" Nov 26 09:51:22 crc kubenswrapper[4909]: E1126 09:51:22.898039 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" containerName="copy" Nov 26 09:51:22 crc kubenswrapper[4909]: I1126 09:51:22.898045 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" containerName="copy" Nov 26 09:51:22 crc kubenswrapper[4909]: E1126 09:51:22.898081 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerName="extract-utilities" Nov 26 09:51:22 crc kubenswrapper[4909]: I1126 09:51:22.898087 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerName="extract-utilities" Nov 26 09:51:22 crc kubenswrapper[4909]: E1126 09:51:22.898106 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" containerName="gather" Nov 26 09:51:22 crc kubenswrapper[4909]: I1126 09:51:22.898113 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" containerName="gather" Nov 26 09:51:22 crc kubenswrapper[4909]: E1126 09:51:22.898122 4909 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerName="registry-server" Nov 26 09:51:22 crc kubenswrapper[4909]: I1126 09:51:22.898128 4909 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerName="registry-server" Nov 
26 09:51:22 crc kubenswrapper[4909]: I1126 09:51:22.898331 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d7d90b-3239-4628-91eb-0dce2cce5663" containerName="registry-server" Nov 26 09:51:22 crc kubenswrapper[4909]: I1126 09:51:22.898345 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" containerName="copy" Nov 26 09:51:22 crc kubenswrapper[4909]: I1126 09:51:22.898374 4909 memory_manager.go:354] "RemoveStaleState removing state" podUID="6648e3ee-fe30-46f7-ba4b-f9957e6e18ac" containerName="gather" Nov 26 09:51:22 crc kubenswrapper[4909]: I1126 09:51:22.900966 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:22 crc kubenswrapper[4909]: I1126 09:51:22.916619 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4krq"] Nov 26 09:51:23 crc kubenswrapper[4909]: I1126 09:51:23.093038 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gjh7\" (UniqueName: \"kubernetes.io/projected/1b8acf19-0399-4cc0-adaf-1a3135aa967d-kube-api-access-9gjh7\") pod \"redhat-marketplace-g4krq\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:23 crc kubenswrapper[4909]: I1126 09:51:23.093123 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-catalog-content\") pod \"redhat-marketplace-g4krq\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:23 crc kubenswrapper[4909]: I1126 09:51:23.093203 4909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-utilities\") pod \"redhat-marketplace-g4krq\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:23 crc kubenswrapper[4909]: I1126 09:51:23.195526 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-catalog-content\") pod \"redhat-marketplace-g4krq\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:23 crc kubenswrapper[4909]: I1126 09:51:23.195631 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-utilities\") pod \"redhat-marketplace-g4krq\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:23 crc kubenswrapper[4909]: I1126 09:51:23.195775 4909 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gjh7\" (UniqueName: \"kubernetes.io/projected/1b8acf19-0399-4cc0-adaf-1a3135aa967d-kube-api-access-9gjh7\") pod \"redhat-marketplace-g4krq\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:23 crc kubenswrapper[4909]: I1126 09:51:23.196031 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-catalog-content\") pod \"redhat-marketplace-g4krq\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:23 crc kubenswrapper[4909]: I1126 09:51:23.196098 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-utilities\") pod \"redhat-marketplace-g4krq\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:23 crc kubenswrapper[4909]: I1126 09:51:23.219370 4909 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gjh7\" (UniqueName: \"kubernetes.io/projected/1b8acf19-0399-4cc0-adaf-1a3135aa967d-kube-api-access-9gjh7\") pod \"redhat-marketplace-g4krq\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:23 crc kubenswrapper[4909]: I1126 09:51:23.220933 4909 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:23 crc kubenswrapper[4909]: I1126 09:51:23.717755 4909 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4krq"] Nov 26 09:51:24 crc kubenswrapper[4909]: I1126 09:51:24.470661 4909 generic.go:334] "Generic (PLEG): container finished" podID="1b8acf19-0399-4cc0-adaf-1a3135aa967d" containerID="afb2ba46bd0de83fbc8a19e7cb17bec4fa04b6b18ec72a6099321eb927764120" exitCode=0 Nov 26 09:51:24 crc kubenswrapper[4909]: I1126 09:51:24.470726 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4krq" event={"ID":"1b8acf19-0399-4cc0-adaf-1a3135aa967d","Type":"ContainerDied","Data":"afb2ba46bd0de83fbc8a19e7cb17bec4fa04b6b18ec72a6099321eb927764120"} Nov 26 09:51:24 crc kubenswrapper[4909]: I1126 09:51:24.470798 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4krq" event={"ID":"1b8acf19-0399-4cc0-adaf-1a3135aa967d","Type":"ContainerStarted","Data":"24b87527a25c3b3d3f6cffb47bcf946d11569fa567efe3351700c659ec801d90"} Nov 26 09:51:24 crc kubenswrapper[4909]: I1126 09:51:24.473939 4909 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 09:51:25 crc kubenswrapper[4909]: I1126 09:51:25.487104 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4krq" event={"ID":"1b8acf19-0399-4cc0-adaf-1a3135aa967d","Type":"ContainerStarted","Data":"b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f"} Nov 26 09:51:26 crc kubenswrapper[4909]: I1126 09:51:26.509031 4909 generic.go:334] "Generic (PLEG): container finished" podID="1b8acf19-0399-4cc0-adaf-1a3135aa967d" containerID="b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f" exitCode=0 Nov 26 09:51:26 crc kubenswrapper[4909]: I1126 09:51:26.524827 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4krq" event={"ID":"1b8acf19-0399-4cc0-adaf-1a3135aa967d","Type":"ContainerDied","Data":"b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f"} Nov 26 09:51:27 crc kubenswrapper[4909]: I1126 09:51:27.523297 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4krq" 
event={"ID":"1b8acf19-0399-4cc0-adaf-1a3135aa967d","Type":"ContainerStarted","Data":"eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9"} Nov 26 09:51:27 crc kubenswrapper[4909]: I1126 09:51:27.556173 4909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g4krq" podStartSLOduration=2.967466468 podStartE2EDuration="5.55615135s" podCreationTimestamp="2025-11-26 09:51:22 +0000 UTC" firstStartedPulling="2025-11-26 09:51:24.473519022 +0000 UTC m=+10256.619730218" lastFinishedPulling="2025-11-26 09:51:27.062203934 +0000 UTC m=+10259.208415100" observedRunningTime="2025-11-26 09:51:27.543627855 +0000 UTC m=+10259.689839031" watchObservedRunningTime="2025-11-26 09:51:27.55615135 +0000 UTC m=+10259.702362516" Nov 26 09:51:33 crc kubenswrapper[4909]: I1126 09:51:33.221636 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:33 crc kubenswrapper[4909]: I1126 09:51:33.222857 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:33 crc kubenswrapper[4909]: I1126 09:51:33.307885 4909 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:33 crc kubenswrapper[4909]: I1126 09:51:33.664610 4909 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:33 crc kubenswrapper[4909]: I1126 09:51:33.742731 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4krq"] Nov 26 09:51:35 crc kubenswrapper[4909]: I1126 09:51:35.616945 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g4krq" podUID="1b8acf19-0399-4cc0-adaf-1a3135aa967d" containerName="registry-server" containerID="cri-o://eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9" gracePeriod=2 Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.132335 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.184113 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-catalog-content\") pod \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.184175 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gjh7\" (UniqueName: \"kubernetes.io/projected/1b8acf19-0399-4cc0-adaf-1a3135aa967d-kube-api-access-9gjh7\") pod \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.184279 4909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-utilities\") pod \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\" (UID: \"1b8acf19-0399-4cc0-adaf-1a3135aa967d\") " Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.185451 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-utilities" (OuterVolumeSpecName: "utilities") pod "1b8acf19-0399-4cc0-adaf-1a3135aa967d" (UID: "1b8acf19-0399-4cc0-adaf-1a3135aa967d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.196301 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8acf19-0399-4cc0-adaf-1a3135aa967d-kube-api-access-9gjh7" (OuterVolumeSpecName: "kube-api-access-9gjh7") pod "1b8acf19-0399-4cc0-adaf-1a3135aa967d" (UID: "1b8acf19-0399-4cc0-adaf-1a3135aa967d"). InnerVolumeSpecName "kube-api-access-9gjh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.210988 4909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b8acf19-0399-4cc0-adaf-1a3135aa967d" (UID: "1b8acf19-0399-4cc0-adaf-1a3135aa967d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.286882 4909 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.286910 4909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gjh7\" (UniqueName: \"kubernetes.io/projected/1b8acf19-0399-4cc0-adaf-1a3135aa967d-kube-api-access-9gjh7\") on node \"crc\" DevicePath \"\"" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.286919 4909 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8acf19-0399-4cc0-adaf-1a3135aa967d-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.627883 4909 generic.go:334] "Generic (PLEG): container finished" podID="1b8acf19-0399-4cc0-adaf-1a3135aa967d" containerID="eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9" exitCode=0 Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.627929 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4krq" event={"ID":"1b8acf19-0399-4cc0-adaf-1a3135aa967d","Type":"ContainerDied","Data":"eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9"} Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.628117 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4krq" event={"ID":"1b8acf19-0399-4cc0-adaf-1a3135aa967d","Type":"ContainerDied","Data":"24b87527a25c3b3d3f6cffb47bcf946d11569fa567efe3351700c659ec801d90"} Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.628136 4909 scope.go:117] "RemoveContainer" containerID="eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.627966 4909 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4krq" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.657752 4909 scope.go:117] "RemoveContainer" containerID="b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.677359 4909 scope.go:117] "RemoveContainer" containerID="afb2ba46bd0de83fbc8a19e7cb17bec4fa04b6b18ec72a6099321eb927764120" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.679734 4909 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4krq"] Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.689415 4909 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4krq"] Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.728226 4909 scope.go:117] "RemoveContainer" containerID="eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9" Nov 26 09:51:36 crc kubenswrapper[4909]: E1126 09:51:36.728690 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9\": container with ID starting with eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9 not found: ID does not exist" containerID="eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.728721 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9"} err="failed to get container status \"eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9\": rpc error: code = NotFound desc = could not find container \"eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9\": container with ID starting with eef1ff0240dacfe2c4b82af9ce02a2d5580fec5d479b63c1b79eb43b44e9e6c9 not found: ID does not exist" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.728743 4909 scope.go:117] "RemoveContainer" containerID="b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f" Nov 26 09:51:36 crc kubenswrapper[4909]: E1126 09:51:36.729616 4909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f\": container with ID starting with b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f not found: ID does not exist" containerID="b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.729654 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f"} err="failed to get container status \"b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f\": rpc error: code = NotFound desc = could not find container \"b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f\": container with ID starting with b44871e9db829e96d696b9ca83d0c2946014c0afa92b7e4a154a0611e3069d3f not found: ID does not exist" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.729680 4909 scope.go:117] "RemoveContainer" containerID="afb2ba46bd0de83fbc8a19e7cb17bec4fa04b6b18ec72a6099321eb927764120" Nov 26 09:51:36 crc kubenswrapper[4909]: E1126 09:51:36.729940 4909 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"afb2ba46bd0de83fbc8a19e7cb17bec4fa04b6b18ec72a6099321eb927764120\": container with ID starting with afb2ba46bd0de83fbc8a19e7cb17bec4fa04b6b18ec72a6099321eb927764120 not found: ID does not exist" containerID="afb2ba46bd0de83fbc8a19e7cb17bec4fa04b6b18ec72a6099321eb927764120" Nov 26 09:51:36 crc kubenswrapper[4909]: I1126 09:51:36.729974 4909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb2ba46bd0de83fbc8a19e7cb17bec4fa04b6b18ec72a6099321eb927764120"} err="failed to get container status \"afb2ba46bd0de83fbc8a19e7cb17bec4fa04b6b18ec72a6099321eb927764120\": rpc error: code = NotFound desc = could not find container \"afb2ba46bd0de83fbc8a19e7cb17bec4fa04b6b18ec72a6099321eb927764120\": container with ID starting with afb2ba46bd0de83fbc8a19e7cb17bec4fa04b6b18ec72a6099321eb927764120 not found: ID does not exist" Nov 26 09:51:37 crc kubenswrapper[4909]: I1126 09:51:37.300894 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:51:37 crc kubenswrapper[4909]: I1126 09:51:37.300975 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:51:38 crc kubenswrapper[4909]: I1126 09:51:38.515907 4909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8acf19-0399-4cc0-adaf-1a3135aa967d" path="/var/lib/kubelet/pods/1b8acf19-0399-4cc0-adaf-1a3135aa967d/volumes" Nov 26 09:51:56 crc kubenswrapper[4909]: I1126 09:51:56.448249 4909 trace.go:236] Trace[948109214]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (26-Nov-2025 09:51:55.385) (total time: 1060ms): Nov 26 09:51:56 crc kubenswrapper[4909]: Trace[948109214]: [1.060736165s] [1.060736165s] END Nov 26 09:52:07 crc kubenswrapper[4909]: I1126 09:52:07.301250 4909 patch_prober.go:28] interesting pod/machine-config-daemon-4lffv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 09:52:07 crc kubenswrapper[4909]: I1126 09:52:07.301789 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 09:52:07 crc kubenswrapper[4909]: I1126 09:52:07.301835 4909 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" Nov 26 09:52:07 crc kubenswrapper[4909]: I1126 09:52:07.302562 4909 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef342f58a0c945ed5951507c251ec85634dba79cb98f3708ddc79cb961a6ae1c"} 
pod="openshift-machine-config-operator/machine-config-daemon-4lffv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 09:52:07 crc kubenswrapper[4909]: I1126 09:52:07.302627 4909 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" podUID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerName="machine-config-daemon" containerID="cri-o://ef342f58a0c945ed5951507c251ec85634dba79cb98f3708ddc79cb961a6ae1c" gracePeriod=600 Nov 26 09:52:08 crc kubenswrapper[4909]: I1126 09:52:08.046637 4909 generic.go:334] "Generic (PLEG): container finished" podID="602939ce-1411-4a17-a42f-719afb7c6dd9" containerID="ef342f58a0c945ed5951507c251ec85634dba79cb98f3708ddc79cb961a6ae1c" exitCode=0 Nov 26 09:52:08 crc kubenswrapper[4909]: I1126 09:52:08.046733 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerDied","Data":"ef342f58a0c945ed5951507c251ec85634dba79cb98f3708ddc79cb961a6ae1c"} Nov 26 09:52:08 crc kubenswrapper[4909]: I1126 09:52:08.047198 4909 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4lffv" event={"ID":"602939ce-1411-4a17-a42f-719afb7c6dd9","Type":"ContainerStarted","Data":"9dd4f929b05d2688acc77746b34596826dbd860b44aae43e4c10d769d415a348"} Nov 26 09:52:08 crc kubenswrapper[4909]: I1126 09:52:08.047223 4909 scope.go:117] "RemoveContainer" containerID="6f2acd8a108db43494f62310d8f7fc08bd568470b063ced26a67ce67fb598800" Nov 26 09:53:07 crc kubenswrapper[4909]: I1126 09:53:07.738017 4909 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-xt76w" podUID="2c2c78bd-80a9-4543-b1d1-432d3a29d3e5" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"